Department of Design, Development, Environment and Materials, The Open University, UK
ABSTRACT
Mathematical methods are important for research in many aspects of acoustics. Currently, fundamental mathematical methodologies taught at undergraduate level are often advanced through independent learning by individual researchers: they develop their mathematical skills as needed rather than being made aware of the potential of advanced mathematical tools at the start of their research career. Furthermore, most researchers in acoustics do not have access to master's-level courses to broaden their postgraduate study. Attempts to remedy this were made through summer schools held in 2003, 2005 and 2007 at Southampton and Salford Universities in the UK. The content and timetable planning, recruitment and student feedback from these schools are reported, together with general conclusions about their performance.
Environmental Directions, Brisbane, Australia
ABSTRACT
Presenting workshops, lectures and seminars on noise and vibration over the last 20 years has given the author an insight into how effectively participants take on board and put into practice the material studied. It has also demonstrated the need to constantly update the study material to keep it relevant and meaningful for changing audiences. The inclusion of case studies enhances students' involvement and problem-solving skills. Advances in computer technology have made it possible to make presentations more realistic by incorporating case studies, using audio and video of noise and its effects in PowerPoint presentations to which students and other participants can relate. Despite these advances in the delivery of study material, basic mistakes that should not happen are regularly observed in real-life noise assessment. This is particularly true of the use of noise dose meters and, to a lesser extent, the use of sound level meters. For noise dose meters the basic mistakes concern the setting up, attachment and removal of the instrument; for sound level meters they include not recording where the measurement was made and not obtaining additional information, e.g. about the activities and exposure duration.
Acoustics, Aalborg University, Denmark
ABSTRACT
The master program in Acoustics (M.Sc.) at Aalborg University is taught at the Department of Electronic Systems. The M.Sc. program consists of three semesters with course units and problem-based project work organized in groups, and a final semester for a master thesis. During the first three semesters, the learning objectives are distributed between courses with independent examination and a semester project. Each semester has a theme with which the projects must comply. Supervisors, students or industry propose the problem that becomes the basis for the project work. Under supervision, the students narrow down the problem, address possible solutions, and typically implement one or more of the options for further evaluation. The courses supplement the project work by adding specific and general knowledge of the subject areas of each semester. The courses either have direct application in the project work, or help define the candidate's professional profile. This presentation gives an overview of Problem Based Learning organized in groups in the M.Sc. in Acoustics program of Aalborg University. Examples of projects and course activities are presented to illustrate the relation and interaction between course and project work.
Kanazawa Institute of Technology, Ishikawa, Japan
ABSTRACT
Speech is a physiological signal generated by muscular motions of the lungs, vocal cords, larynx, jaw, tongue, and lips. The coordinated articulatory movements of these organs are so complex that they are difficult for either students or professionals to understand. For the tongue in particular, the anatomical structures and functions are well studied, but its articulatory movements during speech are still under investigation.
To help in understanding tongue shape and motion, I made a physical model of the tongue from a viscoelastic urethane rubber gel, using the following procedure. First, a cast of the tongue was formed from clay hardened by baking. The shape of the tongue was determined with reference to anatomy books, MRI images, educational models, and our own tongues. Next, a mold was made with silicone. The model was then molded and duplicated in urethane rubber gel. The model includes the internal and external tongue muscles, although the current version is made as a whole-shape model of the tongue body with the muscles combined.
Compared with the materials currently used in speech science education, such as drawings, pictures, videos, or human body models, the proposed tongue model is useful for understanding the three-dimensional tongue shape and the positions and motions of the internal and external tongue muscles, because students can hold and touch the realistic tongue model and make it move and deform by pushing and pulling the viscoelastic tongue body. Where pictures and explanatory texts fall short, the proposed model can help students understand the anatomical structures and functions of speech articulation intuitively. Questionnaires from students in a speech science seminar show that the proposed model is an effective tool for understanding speech articulation. It can be applied not only to teaching speech science, but also to the elucidation of speech articulation by scientists and engineers, and to the development of a tongue actuator for speaking robots.
(1) Faculty of Design, Kyushu University, Japan (2) Yamaha Corporation, Japan (3) Yamaha Business Support Corporation, Japan
ABSTRACT
This is a case study of curriculum development for technical listening training. Technical listening training is a systematic education program designed to allow prospective acoustic engineers and sound designers to enhance their auditory sensitivity. The authors established a training strategy at an acoustics-related company, Yamaha Corporation. We re-organized existing, and developed new, curricula for a training suite for company employees. Discrimination, level difference identification and frequency identification training were classified as 'beginners training'. Identification of reverberation time and some application-specific training were classified as 'expert training'. The company successfully conducted 9 days of training for freshman engineers. Trainee learning curves showed that auditory sensitivity was improved.
(1) 2nd University of Naples, Italy (2) RWTH Aachen University, Germany (3) NTNU Trondheim, Norway (4) University of Zagreb, Croatia
ABSTRACT
EAA Summer Schools are an integral part of the Young Acousticians Programme of the European Acoustics Association. They consist of various advanced-level courses, taught by internationally recognized and distinguished experts in acoustics, and they are integrated into a European regional conference. Furthermore, structured sessions of the conference are related to the summer school courses, thus connecting basic lectures to keynotes and other invited papers on advanced topics. Those sessions are co-chaired by one experienced expert and one young acoustician, typically a doctoral student. The first EAA Summer School will be held in Ljubljana, Slovenia, with short courses on Soundscapes, Voice and Musical Acoustics, Building Acoustics, Hydroacoustics, Numerical Methods, Psychoacoustics and Ultrasound. In the presentation we will illustrate the background and the motivation of this initiative. Content, organization and future plans for integration into European higher education in acoustics will be discussed.
Acoustical and Mechanical Engineering Laboratory (LEAM), Technical University of Catalonia, Spain
ABSTRACT
Since the European Directive on Environmental Noise 2002/49 came into effect, requiring strategic agglomeration and infrastructure noise maps to be made, the demand for environmental acoustics knowledge has grown in Spain. Currently several groups and companies are available to carry out noise surveys and to identify the noise causes in different streets or areas of a city. However, a second stage is now starting: once the noise causes are identified, noise control techniques need to be applied. This is supported by the Spanish adaptation of the regulation, known as the Ley del Ruido (2003), which states that noise control techniques should be applied to minimize the acoustic emission of municipal work activities, municipal devices, infrastructures, road work… This means that a demand for noise control knowledge is arising. This knowledge is scarcely provided in the bachelor degrees currently available in Spain, although there are some master degrees focused on acoustics. However, it is difficult to find a single institution that can cover all the topics needed for a complete acoustics curriculum. This work analyses the feasibility of creating an integral curriculum in noise control involving different Spanish research groups, in order to benefit from the expertise of each one and cover the legal and industrial needs for acoustics knowledge. In that way the teaching effort would be optimized and the appropriate facilities would be available; however, funds would be necessary for the mobility of teachers and students.
(1) AECOM, Adelaide, SA, Australia (2) Acoustics and Vibration Unit, UNSW@ADFA, Canberra, Australia
ABSTRACT
Acoustical consulting companies frequently need to employ staff but find that, while there may be very good applicants with engineering and science backgrounds, few have any experience in acoustics. Larger consultancies can provide 'in-house' training, but this is a strain on resources, and smaller consultancies do not have this capacity. Any course available via the formal university system may not be available at a suitable time or location. A flexible distance learning program of study, based on the UK Institute of Acoustics Diploma, has been developed as a short course and managed via the university. A key feature of this program is that registrants do not need to attend any central location at any time during the program. The early experiences with implementing the program have been influenced by the continued interest and support of senior, experienced acoustical consultants. In this paper we discuss the structure of this fully flexible distance learning program and experiences in its implementation.
(1) Institute of Combustion Engines and Transportation, Division of Rail Vehicles, Poznan University of Technology, Poznan, Poland (2) Institute of Acoustics, Adam Mickiewicz University, Poznan, Poland
ABSTRACT
Teaching and training in spatial orientation and mobility (SOM) is an important element of the education of blind and visually impaired people. Despite progress in supporting equipment technology and research on spatial orientation, blind people still use old and not always effective methods. Therefore, a method of SOM training based on environmental sounds may be a major step in "opening" the surrounding world for them. The method will be a supplement to, not a substitution for, the popular orientation method based on a white cane. A basic tool for the method is a library of sound events and vibrations. The library collects vibration and acoustic signals that may be helpful or disturbing for SOM, as well as the specific sounds of places and objects often visited by persons with sight disabilities. In the first step, the necessary signals were identified: a questionnaire about various aspects of signals helping or disturbing spatial orientation was administered to blind and visually impaired people. In the next step, potential signals for recording were classified according to the estimated level of training. Next, signals were recorded using an artificial head or in-the-ear microphones at heights of 1.6 m and 0.9 m. A survey of the collected signals and their classification will be presented.
(1) Level Acoustics, Eindhoven, The Netherlands (2) Odeon, Lyngby, Denmark
ABSTRACT
The use of acoustic 3D modelling software has become increasingly popular among acousticians. Some software developers offer introductory courses for new users; however, there is a need for more advanced courses for experienced modellers. Such a course should not only consist of lectures on the scientific background of the model, but should also give room for sharing practical experience so that participants can learn from one another. In this context a master class on room acoustic prediction modelling took place in January 2010. A significant part of this master class consisted of a modelling workshop. By working on an assignment in small groups, participants were stimulated to discuss ideas and exchange knowledge.
The workshop was divided into four parts, each carefully tuned to the theoretical lectures in between. The workshop assignment was to compare predicted room acoustical parameters with measurement results for reverberation and speech intelligibility in an open plan office. An auralisation also had to be made using multiple sound sources. The open plan office of the Laboratorium voor Akoestiek of Eindhoven University of Technology, where the workshop took place, served as an interesting modelling object. This room was interesting for educational reasons, since the participants were inside the room, as well as for acoustical reasons, because it consists of two coupled volumes and contains many details such as furniture and a wide range of different materials.
In this paper the assignment is elucidated and the results are presented. The response of the participants and the experience of the master class showed that a workshop is an indispensable part of master classes in the field of room acoustics.
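As background to the reverberation comparison in the assignment, the sketch below shows one common way of estimating a reverberation time such as T20 from a measured or simulated impulse response by Schroeder backward integration. It is a minimal illustration only, not part of the master class material; the function names, sample rate and synthetic decay are assumptions.

```python
import numpy as np

def schroeder_curve(h):
    """Energy decay curve (dB) from an impulse response via backward integration."""
    energy = np.cumsum(h[::-1] ** 2)[::-1]          # remaining energy from t to the end
    return 10.0 * np.log10(energy / energy[0])

def reverberation_time(h, fs, start_db=-5.0, end_db=-25.0):
    """Estimate T20 (use end_db=-35.0 for T30) by a line fit on the decay curve."""
    edc = schroeder_curve(h)
    t = np.arange(len(h)) / fs
    idx = np.where((edc <= start_db) & (edc >= end_db))[0]
    slope, _ = np.polyfit(t[idx], edc[idx], 1)       # decay rate in dB per second
    return -60.0 / slope                             # extrapolate to -60 dB

# Example with a synthetic exponential decay corresponding to roughly 1.0 s:
fs = 44100
t = np.arange(int(2.0 * fs)) / fs
h = np.random.randn(len(t)) * 10 ** (-3.0 * t)       # 60 dB of decay in about 1 s
print(round(reverberation_time(h, fs), 2))
```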
School of Engineering, Edith Cowan University, Joondalup, WA, Australia
ABSTRACT
In undergraduate Physics and Engineering courses on acoustics, experiments typically involve the use of a Digital Storage Oscilloscope (DSO) and a Function Generator (FG). These relatively expensive and bulky pieces of bench-top equipment make it prohibitive for external, distance, or off-campus students to be involved in experimental work without attending a residential school. However, there is a growing demand, particularly from the Engineering sector, for courses to be more available remotely. To that end, Edith Cowan University is investigating the possibility of remote laboratory programs, which can be completed by off-campus students to ensure that their Applied Physics or Engineering knowledge is balanced by experimental experience. In this work, we show the implementation of a computer-based DSO and FG using the computer's sound card. Here the PC's microphone jack is used as the DSO input, and the speaker jack is used as the FG output. In an effort to reduce the cost of implementing the experiment, we examine software available for free online. A small number of applications were compared in terms of their interface and functionality, for both the DSO and FG. The software system was then used to conduct a number of acoustics experiments relevant to undergraduate Physics and Engineering, including the Physics of Music, Standing Waves in Pipes, and the Properties of Sound Waves. There are two primary benefits to the computer-based system developed. The first is the enhancement of learning at the undergraduate level, where the knowledge gained by off-campus students can be significantly improved through practical experimental work. Secondly, remote experiments could provide additional laboratory work for students in on-campus subjects where resource issues are making traditional, comprehensive supervised laboratory programs hard to maintain.
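To make the idea concrete, the sketch below uses a full-duplex sound card as a combined function generator and acquisition device: a tone is played out of the speaker jack while the microphone jack is recorded, and the dominant frequency of the captured signal is identified. It uses the third-party Python package sounddevice purely as an example; the paper evaluates free DSO/FG applications rather than this package, and the tone frequency, duration and sample rate are arbitrary values.

```python
import numpy as np
import sounddevice as sd   # third-party package; any full-duplex sound card API would do

fs = 44100                 # sample rate (Hz)
f0 = 440.0                 # test-tone frequency (Hz)
t = np.arange(int(2.0 * fs)) / fs
tone = 0.5 * np.sin(2 * np.pi * f0 * t)           # "function generator" output

# Play the tone on the speaker jack while recording from the microphone jack
recorded = sd.playrec(tone.astype(np.float32), samplerate=fs, channels=1)
sd.wait()

# "Oscilloscope"/spectrum view: locate the strongest frequency component
spectrum = np.abs(np.fft.rfft(recorded[:, 0]))
freqs = np.fft.rfftfreq(len(recorded), 1 / fs)
print("dominant frequency: %.1f Hz" % freqs[np.argmax(spectrum)])
```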
(1) IEMN dpt ISEN, UMR CNRS 8520, Lille, France (2) LOMC, FRE CNRS 3102, Le Havre, France (3) FANO, FR CNRS 3110, France
ABSTRACT
Waves propagating in left-handed materials have unusual properties such as phase and group velocities of opposite signs and a negative refractive index. Periodic lattices have been shown to exhibit such properties both for electromagnetic waves (photonic crystals) and for acoustic waves in fluids (phononic crystals). This work addresses the question of the existence of left-handed elastic waves in phononic crystals. Two-dimensional phononic crystals made of square lattices of cylindrical cavities or inclusions in a solid matrix are considered. Dispersion curves are computed using the plane wave expansion method for real wave vectors in the Brillouin zone and the finite element method for complex wavenumbers along a specific propagation direction. From these results, the existence and symmetry of the left-handed propagation mode in the phononic crystal is discussed and its relationship with lattice geometry and constitutive materials is analyzed.
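For readers unfamiliar with the plane wave expansion (PWE) method mentioned above, the sketch below assembles the generalized eigenvalue problem for the much simpler scalar (fluid-fluid) analogue of a 2D square-lattice phononic crystal, with steel-like rods in water assumed as example materials. It only illustrates the structure of the method and does not reproduce the full elastic, complex-wavenumber analysis of the paper.

```python
import numpy as np
from scipy.linalg import eigvals
from scipy.special import j1

# Scalar model: sum_G' (1/rho)_{G-G'} (k+G).(k+G') p_G' = w^2 sum_G' (1/(rho c^2))_{G-G'} p_G'
a = 1e-2                       # lattice constant (m)
R = 0.3 * a                    # rod radius (m)
rho_a, c_a = 7800.0, 5800.0    # rod (steel-like, treated as a fluid in this scalar sketch)
rho_b, c_b = 1000.0, 1480.0    # host (water)
f_fill = np.pi * R**2 / a**2   # filling fraction

N = 5                          # plane waves: (2N+1)^2 reciprocal lattice vectors
g = 2 * np.pi / a * np.arange(-N, N + 1)
G = np.array([(gx, gy) for gx in g for gy in g])

def fourier_coeff(val_a, val_b, dG):
    """Fourier coefficients of a piecewise-constant property for circular rods."""
    Gn = np.linalg.norm(dG, axis=-1)
    smooth = (val_a - val_b) * 2 * f_fill * j1(Gn * R) / np.maximum(Gn * R, 1e-12)
    return np.where(Gn < 1e-9, val_b + (val_a - val_b) * f_fill, smooth)

dG = G[:, None, :] - G[None, :, :]
inv_rho = fourier_coeff(1 / rho_a, 1 / rho_b, dG)
inv_K = fourier_coeff(1 / (rho_a * c_a**2), 1 / (rho_b * c_b**2), dG)

def frequencies(k):
    """Eigenfrequencies (Hz) at Bloch wave vector k = (kx, ky)."""
    kG = k + G                                   # (M, 2) shifted wave vectors
    A = inv_rho * (kG @ kG.T)                    # (k+G).(k+G') weighted by (1/rho)_{G-G'}
    w2 = eigvals(A, inv_K)                       # generalized eigenvalue problem
    w2 = np.sort(np.real(w2[np.real(w2) > 0]))
    return np.sqrt(w2) / (2 * np.pi)

print(frequencies(np.array([np.pi / a, 0.0]))[:5])   # lowest bands at the X point
```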
(1) UMI Georgia Tech, George W. Woodruff School of Mechanical Engineering, Metz-Technopole, France (2) Institut FEMTO-ST, Université de Franche-Comté, Besançon, France
ABSTRACT
Phononic crystals have attracted much research interest in the last decade due to their unique properties (band gaps, etc.) and potential applications in acoustic filtering and novel transducer design, among others. Many studies have examined the acoustic wave propagation that occurs inside (infinite) phononic crystals. However, in order for phononic crystals to find application in actual devices, they must be of finite size and the diffraction that may occur on the surface of the crystal becomes important. This work presents the results of experiments performed on a 2D phononic crystal consisting of steel cylinders in a water matrix. The diffraction of bulk waves that occurs on the exterior surface of the crystal will be examined, and the surface of the crystal will be shown to function as an acoustic diffraction grating. In addition, angular scans of the diffracted fields will examine the possibility of surface wave generation along the exterior surface of the crystal. It is expected that these results will contribute to a better understanding of finite-size phononic crystals and aid in the development of devices employing such crystals.
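For reference, the surface of a periodic crystal with period d acts on an incident bulk wave much like a classical diffraction grating, with the diffracted orders governed by the standard grating relation (a textbook result, not specific to this paper):

\[ \sin\theta_m = \sin\theta_i + m\,\frac{\lambda}{d}, \qquad m = 0, \pm 1, \pm 2, \ldots \]

Here \(\theta_i\) is the angle of incidence, \(\theta_m\) the angle of the m-th diffracted order, and \(\lambda\) the wavelength in the surrounding fluid; orders with \(|\sin\theta_m| > 1\) are evanescent and can couple to waves travelling along the surface, which is the kind of behaviour the angular scans described above probe.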
(1) Graduate School, The University of Tokyo, Japan (2) Institute of Industrial Science, The University of Tokyo, Japan
ABSTRACT
Swept signals are now widely used in acoustic measurements to obtain impulse responses of the system under test. The overall spectrum, the inverse filter that compresses the sweep into an impulse, and the background noise conditions together prescribe the signal-to-noise ratio of the result as a function of frequency. This paper proposes a time-domain sweep synthesis method using composite square and monomial power function modulated sine sweeps that can customize the resulting SNR-frequency function. Theoretical and practical aspects as well as measurement results are presented.
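As background, the sketch below generates a conventional exponential (logarithmic) sine sweep and its amplitude-compensated inverse filter, and recovers an impulse response by convolution. This is the generic Farina-style procedure, not the composite square/monomial-modulated sweeps proposed in the paper; the frequency range, duration and toy "measured" response are arbitrary examples.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48000
T = 5.0                        # sweep duration (s)
f1, f2 = 20.0, 20000.0         # start/stop frequencies (Hz)
t = np.arange(int(T * fs)) / fs
L = T / np.log(f2 / f1)

# Exponential sweep (instantaneous frequency rises from f1 to f2)
sweep = np.sin(2 * np.pi * f1 * L * (np.exp(t / L) - 1.0))

# Inverse filter: time-reversed sweep with a decaying envelope that
# compensates the sweep's 1/f energy distribution
inverse = sweep[::-1] * np.exp(-t / L)

# With a measured response y (here simulated as the sweep through a toy
# two-path system), the impulse response is obtained by convolution.
y = 0.8 * sweep + 0.2 * np.concatenate([np.zeros(100), sweep[:-100]])
h = fftconvolve(y, inverse)    # impulse response, peaking near sample len(sweep) - 1
```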
(1) North China Electric Power University, P.R. China (2) State Key Laboratory of Acoustics, Academia Sinica (3) University of Western Australia, WA, Australia
ABSTRACT
In this paper, considering the shortcomings of current boiler pipeline leak monitoring systems, a method is presented to improve the localization function of such systems by using passive source localization with a multi-microphone array. A model of a 660 MW boiler is taken as an example to simulate the localization results for different positions of the leak source in the furnace of the boiler. The influencing factors are also analysed, including the effect of sound wave propagation through the combustion temperature field inside the boiler and the reverberation caused by the enclosed nature of the boiler, and the leak localization results are corrected accordingly. It is concluded that the combustion temperature gradient field has the more pronounced impact on the localization results. The work in this paper may provide a useful reference for researchers studying this topic.
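To illustrate the passive localization principle underlying the method, in a strongly simplified form (constant effective sound speed, free field, no temperature-gradient or reverberation corrections, which are exactly the factors the paper analyses), the sketch below estimates a leak position from time differences of arrival (TDOA) at a small microphone array by grid search. All geometry and parameter values are invented examples.

```python
import numpy as np

c = 600.0                                     # assumed effective sound speed in hot flue gas (m/s)
mics = np.array([[0, 0, 0], [8, 0, 0],        # example microphone positions on the
                 [0, 8, 0], [8, 8, 0],        # boiler walls (m), invented values
                 [4, 0, 10], [4, 8, 10]], dtype=float)
source = np.array([3.0, 5.0, 6.0])            # "true" leak position used to fake the TDOAs

# In practice the TDOAs relative to microphone 0 would come from cross-correlation
dist = np.linalg.norm(mics - source, axis=1)
tdoa = (dist - dist[0]) / c

# Grid search: pick the point whose predicted TDOAs best match the measured ones
xs = np.linspace(0, 8, 41)
zs = np.linspace(0, 10, 51)
best, best_err = None, np.inf
for x in xs:
    for y in xs:
        for z in zs:
            p = np.array([x, y, z])
            d = np.linalg.norm(mics - p, axis=1)
            err = np.sum(((d - d[0]) / c - tdoa) ** 2)
            if err < best_err:
                best, best_err = p, err
print("estimated leak position:", best)
```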
Laboratorium voor Akoestiek en Thermische Fysica, Katholieke Universiteit Leuven, Heverlee, Belgium
ABSTRACT
Piezoelectric materials act as very important functional components in sonar projectors, fluid monitors, pulse generators and surface acoustic wave devices. Moreover, piezoelectric materials have been integrated with structural systems to form a class of smart structures and embedded as layers or fibers into multifunctional composites. Much of the interest in electro-acoustic waves is directed towards applications in signal processing, transduction and frequency control, where the transmission and reflection of acoustic energy at surfaces play an important role. In this article, wave propagation in porous piezoelectric materials is studied. First, the Christoffel equation for plane harmonic waves propagating in porous piezoelectric materials is derived. Solutions of the Christoffel equation are obtained and then used to study the reflection-transmission phenomenon in an anisotropic piezoelectric layer loaded with fluid on both sides. The study finds applications in various fields such as medical ultrasonic imaging devices, underwater sonar detectors, and oil reservoir monitoring.
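For orientation, the classical Christoffel equation for a plane wave with unit propagation direction n in a dense (non-porous) piezoelectric solid, given here only as a point of reference for the porous generalization derived in the paper, reads

\[ \left(\bar{c}_{ijkl}\, n_j n_l - \rho v^2 \delta_{ik}\right) u_k = 0, \qquad \bar{c}_{ijkl} = c^{E}_{ijkl} + \frac{(e_{pij} n_p)(e_{qkl} n_q)}{\varepsilon^{S}_{rs} n_r n_s}, \]

where \(c^{E}\) is the stiffness at constant electric field, \(e\) the piezoelectric tensor, \(\varepsilon^{S}\) the permittivity at constant strain, \(\rho\) the density and \(v\) the phase velocity; the three eigenvalues \(\rho v^2\) give the piezoelectrically stiffened quasi-longitudinal and quasi-shear wave speeds.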
Industrial Research Limited, Wellington, New Zealand
ABSTRACT
The motion of an acoustic source relative to some fixed frame produces a Doppler shifting of the source frequency at a fixed point relative to that frame. For linear motion of the source greater than the speed of sound, the radiated sound forms a shock wave whose angle relative to the direction of motion varies with source speed. Some applications in acoustics involve a sound source rotating around a fixed point in space. For example, in surround sound systems, it may be desirable to generate the sound due to a sound source which moves around the listener. As another example, the Leslie speaker is a rotating loudspeaker system designed to produce amplitude and frequency modulation effects. In aeroacoustics, the noise produced by rotating propellers or rotors is of interest and the linear wave equation solution for a rotating source has some relevance. The description of rotating sources also has applicability in other disciplines such as electromagnetism and astronomy.
This paper develops a cylindrical harmonic expansion for the sound field produced by a rotating line source. The expansion has a simple form and reverts to the standard expression for a fixed line source when the rotation speed is zero. For rotational speeds where the source is supersonic, the expansion reproduces features similar to those demonstrated for rotating supersonic point sources, such as a Mach cone emanating from the source position, a spiral cylinder within which the field exhibits a spiralling pattern, and an inner cusp where the circular wavefronts converge. The expansion is implemented in MATLAB using a truncated form, and examples of sound fields are given for both subsonic and supersonic cases.
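For reference, the standard cylindrical-harmonic expansion for a stationary time-harmonic line source at \((r_s, \varphi_s)\), to which the abstract states the rotating-source expansion reverts at zero rotation speed, is (with an \(e^{-i\omega t}\) convention)

\[ \frac{i}{4} H_0^{(1)}\!\bigl(k\,|\mathbf{r}-\mathbf{r}_s|\bigr) = \frac{i}{4}\sum_{n=-\infty}^{\infty} J_n(k r_<)\, H_n^{(1)}(k r_>)\, e^{\,in(\varphi-\varphi_s)}, \]

where \(r_< = \min(r, r_s)\), \(r_> = \max(r, r_s)\) and \(k = \omega/c\). In the rotating case the azimuthal orders acquire frequency shifts at multiples of the rotation rate, which is what produces the Mach cone and spiral wavefront structure at supersonic tip speeds; the specific form of the rotating expansion is developed in the paper and is not reproduced here.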
Predio Meguro Science Laboratory, Tokyo, Japan
ABSTRACT
We have proposed a new physical principle, called the virtual discontinuity principle of diffraction, for analyzing waves diffracted by perfectly reflecting objects, and have formulated a model that calculates diffracted waves as the sum of two elementary diffracted waves. The model is applied to waves diffracted by a wedge, and a high-frequency approximate solution for the diffracted waves is deduced from the model that is exactly the same as the one already derived from the rigorous solution for waves diffracted by the wedge. It is rare for a relation derived from the rigorous solution to be recovered from a model formulated from a top-down physical principle, so this result gives the principle fair support. The above approximate solution, however, does not work in the vicinity of the shadow boundary. The role of diffracted waves lies in compensating the discontinuity caused by the geometrical optics solution, that is, the discontinuity at the shadow boundary. The above agreement may therefore not be enough to validate the principle firmly.
In this presentation a high-frequency approximate solution that works in the vicinity of the shadow boundary is deduced from the model. Deriving this relation from the rigorous solution has not succeeded, since the shadow boundary of the diffracted waves occurs at two angles and these angles change in a complicated way as functions of the wedge and source angles. In contrast, the shadow boundary of the elementary diffracted waves occurs at a single angle, equal to the source angle. This outstanding simplification enabled by the new principle makes it possible to deduce the high-frequency approximate solution near the shadow boundary from the model, and it is combined with the conventional one so that the high-frequency solution can be applied at any angle of observation. The accuracy of the approximate solution is examined by comparison with the rigorous solution: the accuracy of the new approximate solution in the vicinity of the shadow boundary is almost the same as that of the conventional one far outside the shadow boundary. This validates the new principle further, since it makes the analysis remarkably simple. Lastly, the implications of the new principle are discussed briefly.
Mayo Clinic College of Medicine, Rochester, MN, USA
ABSTRACT
Vibro-acoustography (VA) is an emerging imaging technology. In this method, the radiation force of ultrasound is used to vibrate tissue at low (kHz) frequencies. The resulting vibration produces an acoustic field that is detected by a sensitive hydrophone. VA can provide detailed information at high resolution that is not available from conventional B-mode ultrasound (US) imaging. Here, we compare VA and US in breast imaging. An experimental VA system was used to image the breasts of patients with known lesions of various kinds. Results were compared to US. Image quality was assessed based on contrast, resolution, lesion boundaries, and artifacts. VA images displayed breast cysts with well-defined borders. Fibroadenomas were seen with identifiable texture and, in some cases, with enhanced boundaries. Post-lumpectomy scars were displayed with characteristic structure. Some malignant masses were seen with identifiable spiculations. Compared to US, VA images were speckle free and had high contrast and high signal-to-noise ratio. Microcalcifications were particularly visible with VA. The combination of features offered by VA, such as the lack of image speckle, enhanced lesion boundaries, and sensitivity to microcalcifications, constitutes an important advantage of VA over US for breast imaging. It is concluded that VA may become a modality of choice for breast imaging.
Institut d'Electronique, de Microélectronique et de Nanotechnologies (IEMN), Université Lille 1 and UMR CNRS 8520, Avenue Poincaré, 59652 Villeneuve d'Ascq cedex, France
ABSTRACT
Acoustic waves generated at the surface of a solid substrate can induce deformation, motion and even atomization of partially wetting droplets. The characteristic time scales associated with the droplet response differ strongly from the acoustic period, suggesting the existence of nonlinear coupling between the acoustic waves and the droplet dynamics. Although different behaviors have been observed under different experimental conditions (droplet size, acoustic wave frequency, wetting properties of the liquid), the underlying physics remains unclear. To understand it, a parametric experimental study [P. Brunet et al., Phys. Rev. E, 81, 036315 (2010)] was performed at a fixed frequency of 20 MHz, varying the droplet size, the liquid viscosity and the acoustic wave intensity. In these experiments, the free surface of the droplet is modified in three different ways: first a breaking of its symmetry, second global oscillations of the droplet, and finally small-amplitude, higher-frequency "trembling modes". To explain these deformations, two classical nonlinear acoustic driving mechanisms can be invoked: the radiation pressure and the acoustic streaming. The relative importance of these nonlinear phenomena depends strongly on the frequency considered. At 20 MHz, the acoustic wave is multiply reflected inside the droplet and therefore the acoustic radiation pressure plays an important role. At higher frequencies, the acoustic wave hardly reaches the surface and the radiation pressure plays no role. With our experiments, we show that while both acoustic streaming and radiation pressure can induce the asymmetry of the droplet, global oscillations only appear when acoustic radiation is significant. We thereby exhibit for the first time the role played by the acoustic radiation pressure in droplet dynamics in a certain frequency range. The comprehension of these phenomena is of fundamental importance for minimizing the energy required to handle droplets, in view of the harmless manipulation of biofluids.
(1) Institut per a la Gestió Integrada de les Zones Costaneres, Universitat Politècnica de València, Spain (2) Department of Acoustics, Faculty of Physics, Moscow State University, Russia
ABSTRACT
The study of the characteristics of the acoustic field generated by focusing sources, in both the linear and nonlinear regimes, is an active field of research, as these fields are relevant to most ultrasonic applications in medicine and industry. In particular, the linear shift phenomenon (the distance between the geometrical focus of the focused source and the on-axis maximum pressure position in the linear regime, the real focus) was explained by Lucas and Muir in 1982 and corrected by Makov et al. in 2006 on the basis of the parabolic approximation to the ordinary wave equation. The nonlinear shift phenomenon (the movement of the pressure and intensity maxima along the axis of focused acoustic beams under increasing driving voltages) has also been reported and interpreted in previous works. However, although the nonlinear shift has been observed and explained in previous studies, no specific experiment has yet been published with the objective of studying, experimentally and numerically, the focal region of medium and high Fresnel number transducers and the magnitude of this shift. It is important to cover this focusing regime, as the majority of medical devices operate in it. In this work we evaluate the nonlinear shift of an ultrasonic beam with a medium Fresnel number (NF = 6), both in pressure and intensity, and demonstrate that the nonlinear shift is able to move the on-axis maximum pressure location beyond the geometrical focus.
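For context, the linear Fresnel number referred to above is commonly defined as

\[ N_F = \frac{a^2}{\lambda F}, \]

where \(a\) is the source aperture radius, \(F\) the geometric focal distance and \(\lambda\) the wavelength; \(N_F = 6\) therefore corresponds to a moderately focused source of the kind typical of medical transducers. (This is the standard definition stated for the reader's convenience; conventions differing by a factor of order unity exist in the literature.)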
LAUM (Laboratoire d'Acoustique de l'Université du Maine), Université du Maine and CNRS, Le Mans, France
ABSTRACT
The characterization of damage in heterogeneous structural materials such as concrete, rocks, or composites by classical linear acoustical methods, based on the measurement of ultrasonic wave velocities and/or attenuation, does not generally give the sensitivity required for early damage detection. Nonlinear acoustical methods therefore appear to be an interesting alternative. Nonlinear effects can be observed through the distortion of an ultrasonic sine wave as it propagates in a medium: higher harmonics are created, and classical nonlinearity predicts that the resonance frequency of the fundamental resonance mode (Young's mode) changes. In this contribution we present a NonLinear Resonance Spectroscopy (NLRS) approach and use NLRS features such as the resonance frequency shift and the Q-factor change as a function of peak amplitude to characterize damage in concrete and a polymer-based composite. The materials are characterized in the intact state and at gradually increasing damage states. In addition, damage was monitored using the Acoustic Emission (AE) generated by the material during the damage process. A classification of the AE signals is proposed to identify the different damage mechanisms and to understand their contribution to the evolution of the nonlinear behaviour of the materials under investigation. Another nonlinear phenomenon investigated in relation to damage is Acoustical Slow Dynamics (ASD), which corresponds to the response of the material when an external high-drive harmonic acoustic stress applied to the material is removed. In hysteretic materials the initial properties are not recovered instantaneously but over a finite time, which depends on the perturbation level as well as on the material's integrity. In this contribution we report observations of ASD behaviour for a polymer-based composite sample in the intact state as well as at progressively increasing damage states. ASD measurements are correlated with Acoustic Emission data recorded during the different damage steps. With the help of the proposed classification procedure for AE hits, damage mechanisms are identified and then correlated with the global ASD relaxation of the material. Distinct relaxation features are then identified for every damage mechanism. In particular, the relaxation time and the frequency shift have been found to be very sensitive to damage creation and development in the polymer-based composite and in concrete. This work shows the relevance of this approach for developing new, highly sensitive methods for Non-Destructive Testing (NDT) and Structural Health Monitoring (SHM) purposes.
Faculty Of Physics, M.V. Lomonosov Moscow State University, Moscow, Russia
ABSTRACT
Experimental results for the behavior of both the linear and nonlinear elastic properties of polyacrylamide during the polymerization process are shown. The polymerization process has several stages: initiation, the first appearance of active polymerization sites; chain growth, the consecutive joining of monomer molecules to active sites; and chain joining, the attachment of double monomers to chains. During the 80-minute polymerization process the initial liquid solution is transformed into a gel with a different internal structure. For the diagnostics of the elastic properties of the solution during polymerization, an automated ultrasound device employing a pulse method of measurement was used. The amplitudes of a longitudinal acoustic wave at f = 5 MHz and of its second harmonic at f = 10 MHz were measured simultaneously, as well as the change of wave velocity as a function of the time elapsed since the start of polymerization. Measurements of the amplitude at the excitation frequency as a function of time allowed the change in absorption to be calculated. The measurements of the wave amplitudes at the excitation frequency and at its second harmonic provided the change in the nonlinear acoustical parameter, which characterizes the anharmonic nature of the molecular interaction in polymers. The density of the material and the longitudinal velocity of the acoustic wave closely resemble those of water, equalling 1003 kg/m3 and 1500 m/s respectively. An increase in velocity at the beginning of polymerization, due to the presence of air bubbles in the initial solution, was observed. This was followed by a slow monotonic decrease of velocity as a function of time, by approximately 1% of the initial value. The changes in absorption and in the nonlinear acoustical parameter are irregular and show similar features in both parameters at given times, which is believed to be due to the ongoing change in the internal structure of the initial material during the polymerization process. This anomalous behavior is observed in a window 15-60 minutes after the start of polymerization. It is important to note that the nonlinear acoustical parameter appeared to be the most sensitive to the processes occurring during polymerization, as its value changed by over 15%. The experimental results are discussed.
Department Of Acoustics, Faculty Of Physics, M.V. Lomonosov Moscow State University, Moscow, Russia
ABSTRACT
One of the interesting and promising trends in modern acoustics is the study of nonlinear processes caused by the presence of mesoscale inhomogeneities and defect structure in materials. The presence of mesoscale inhomogeneities in solids leads to the appearance of new physical properties not present in homogeneous solids; examples are quantum phenomena such as negative magnetoresistance and quantum galvanomagnetic effects. Experiments conducted by a number of authors have shown that defects in the supramolecular structure of solids give rise to the so-called structural nonlinearity, which is local in character and may exceed the physical nonlinearity due to lattice anharmonicity by two or three orders of magnitude. However, there is still no universally accepted definition of the quantitative characteristics of structural nonlinearity, in the way that the nonlinear acoustic parameter is defined for traveling waves. Numerous experiments only reveal the tendency and allow no quantitative comparison of results. We have analyzed the elastic nonlinearity of solids with micro- and nanoscale defects and the characteristic features of its manifestations. The meaning of the experimentally measured nonlinear parameters of a medium is discussed. The difference in meaning between the local nonlinearity, which is measured in the vicinity of a single defect and depends on the size of the region of averaging, and the effective volume nonlinearity of a medium containing numerous defects is emphasized. The local nonlinearity arising at the tip of a crack is calculated; this nonlinearity decreases with an increase in the size of the region of second harmonic generation. The volume nonlinearity is calculated for a solid containing spherical cavities. The volume nonlinearity is also calculated for a medium containing infinitely thin cracks in the form of circular disks, which assume the shape of ellipsoids in the course of crack opening. It is shown that in the presence of an ensemble of disk-shaped cracks (with the disks parallel to each other), contrary to the case of cavities, the amplification of nonlinearity does not depend on Poisson's ratio or the linear elastic moduli of the medium. The estimates show that the increase in nonlinearity in the presence of cracks can be greater than that in the presence of spherical cavities.
Nizhny Novgorod Branch of Mechanical Engineering Research Institute of Russian Academy of Sciences, Nizhny Novgorod, Russia
ABSTRACT
The propagation of longitudinal magnetoelastic waves in a rod is considered.
Magnetoelasticity is a scientific branch which arose at the junction of the mechanics of deformable bodies, electrodynamics and acoustics. It studies dynamic processes arising from the interaction between electromagnetic and deformation fields.
A nonlinear Bernoulli rod model has been used to describe the longitudinal oscillations. The rod is assumed to be an ideal conductor. An evolution equation is obtained from the system of magnetoelastic equations by introducing a small parameter into the system. The resulting evolution equation is a Riemann equation for the axial deformation.
The profile of the Riemann wave distorts as it propagates because different parts of the wave travel at different velocities, so that at a certain moment of time the wave overturns. In this model the time at which the wave overturns depends on the value of the external magnetic field.
The initial wave profile is taken to be sinusoidal. The moment of overturning grows with increasing external magnetic field; the external magnetic field thus stabilizes the Riemann wave by delaying its overturning.
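For readers unfamiliar with simple (Riemann) waves, the evolution equation referred to above has the generic form (the specific magnetoelastic coefficients of the paper are not reproduced here)

\[ \frac{\partial u}{\partial t} + \bigl(c_0 + \alpha u\bigr)\frac{\partial u}{\partial x} = 0, \]

so that each value of \(u\) propagates with its own speed \(c_0 + \alpha u\). For a sinusoidal initial profile \(u(x,0) = u_0 \sin(kx)\), the profile first becomes multivalued (the wave overturns) at

\[ t^{*} = \frac{1}{\alpha\, u_0\, k}. \]

In this generic picture, the observation that the overturning time grows with the external magnetic field is consistent with the effective nonlinearity coefficient \(\alpha\) decreasing as the field increases.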
Dept. of Physics, Ryerson University, Toronto, Canada
ABSTRACT
In this work, a new 3D numerical model to simulate the nonlinear propagation of continuous-wave ultrasound beams in homogeneous dissipative media is presented. The model implements a second-order operator splitting method in which the effects of diffraction, nonlinearity and attenuation are propagated in sequence over incremental steps. It makes use of an arbitrary 3D source geometry definition and a non-axisymmetric propagation scheme, which leads to a full 3D solution for the resulting nonlinear field. The diffraction sub-step is accomplished using an angular spectrum approach coupled with an enhanced formula to calculate the acoustic pressure in non-planar fields without using the standard linear relationship between pressure and particle velocity. Comparisons with other numerical models (both linear and nonlinear) as well as experimental data show good agreement. The proposed model is a particularly useful tool for carrying out accurate and efficient simulations of high intensity focused ultrasound (HIFU) beams in tissue, where the effects of nonlinearity, diffraction, and attenuation are significant.
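The sketch below illustrates the operator-splitting idea in a deliberately reduced form: a quasilinear two-harmonic model in which, over each step, diffraction is handled by the angular spectrum method, attenuation by an exponential factor, and nonlinearity by second-harmonic generation from the fundamental. It is not the authors' full 3D continuous-wave model (which retains many harmonics and a non-planar pressure-velocity relation); all parameter values are illustrative, water-like assumptions.

```python
import numpy as np

def angular_spectrum(p, dx, dz, k):
    """Propagate a monochromatic field p(x, y) over a distance dz (linear diffraction)."""
    kx = 2 * np.pi * np.fft.fftfreq(p.shape[0], dx)
    ky = 2 * np.pi * np.fft.fftfreq(p.shape[1], dx)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    kz = np.sqrt(k**2 - KX**2 - KY**2 + 0j)              # evanescent components decay
    return np.fft.ifft2(np.fft.fft2(p) * np.exp(1j * kz * dz))

# Medium and source parameters (illustrative, water-like values)
c0, rho0, beta = 1500.0, 1000.0, 3.5
f0 = 1e6
w0 = 2 * np.pi * f0
alpha1, alpha2 = 0.025, 0.1                              # attenuation (Np/m), ~frequency squared
k1, k2 = w0 / c0, 2 * w0 / c0

# Complex pressure amplitudes of fundamental and second harmonic on the source plane
N, dx, dz = 128, 1e-3, 1e-3
x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
p1 = 1e5 * np.exp(-(X**2 + Y**2) / (10e-3) ** 2).astype(complex)   # Gaussian source, 100 kPa
p2 = np.zeros_like(p1)

for step in range(200):                                  # march 0.2 m in z
    # (1) diffraction sub-step
    p1 = angular_spectrum(p1, dx, dz, k1)
    p2 = angular_spectrum(p2, dx, dz, k2)
    # (2) attenuation sub-step
    p1 *= np.exp(-alpha1 * dz)
    p2 *= np.exp(-alpha2 * dz)
    # (3) nonlinearity sub-step: quasilinear second-harmonic growth from the fundamental
    #     (a convention-dependent phase factor is omitted; only the growth of |p2| is shown)
    p2 += (beta * w0 / (2 * rho0 * c0**3)) * p1**2 * dz

print("peak |p2| after 0.2 m: %.1f Pa" % np.abs(p2).max())
```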
Department of Mechanical Engineering, McGill University, Quebec, Canada
ABSTRACT
The flow field in an acoustic standing wave tube was measured using time-resolved particle image velocimetry (PIV). Verifications were made through comparisons between measured and predicted acoustic particle velocities in the spatial domain and the time domain. The accuracy of the time-resolved PIV system was satisfactory, at least for the periodic flow velocity component. The steady streaming flow field was then obtained through synchronous data acquisition. The streaming flow featured recirculation patterns which were different from classical Rayleigh or Schlichting streaming patterns. One possible reason is that the streaming Reynolds number was too low for classical streaming to occur.
Tohoku University, Sendai, Japan
ABSTRACT
In nuclear power plants, stress corrosion cracks (SCCs) have been observed even in metals with high corrosion resistance. Although crack depths need to be evaluated with high accuracy, there is a concern that cracks may be overlooked or underestimated by non-destructive testing. One of the reasons is that SCCs formed in water at high temperature and high pressure are closed by oxide films. To solve this problem, we focused on the subharmonic waves 1) generated by the interaction of large-amplitude ultrasound with closed cracks and developed a novel imaging method, SPACE (Subharmonic Phased Array for Crack Evaluation). SPACE can image the open and closed parts of cracks in a fundamental image (FA) and a subharmonic image (SA), respectively. In early studies, we demonstrated its performance, for example on an SCC in SUS304 base material, but a comparison between different SCCs had not yet been made.
In this study, we evaluated the open-close behavior and crack depths of SCCs formed under different conditions by coherent measurement using SPACE and a linear phased array (PA). The objects were (A) an SCC introduced from a notch in Inconel 600 weld metal in tetrathionate solution, and (B) an SCC extended obliquely from a fatigue crack tip in SUS304 base metal in MgCl2 solution. For (A) we could image similar crack tips in PA and FA, but we could not image the cracks in SA; the crack tips of (A) were therefore estimated to be open, and this was subsequently confirmed by destructive testing. On the other hand, for (B) we imaged crack tips of equal depth in PA and FA and, in addition, sometimes imaged deeper cracks in SA than in FA; some crack tips of (B) were therefore estimated to be closed. It is interesting to note that SCCs (A) and (B) were both introduced by accelerated tests in chemical solutions, yet one was open and the other was partly closed. The specimens are base metal and weld metal, so their metallographic structures differ greatly. However, there has been no report attributing the difference in closure state between SCCs (A) and (B) to the difference in metallographic structure. It would therefore be necessary to consider the difference in the stress state at the points where the SCCs were introduced. For this purpose it is useful to evaluate various SCCs in the same material by SPACE and to compare and discuss the crack closure behavior in detail.
Department for Non-Destructive Testing, Institute for Polymer Technology, University of Stuttgart, Stuttgart, Germany
ABSTRACT
Nonlinear effects in air, which are regarded as fundamental to classical nonlinear acoustics, have been experimentally investigated almost exclusively in the sonic frequency range. The present study extends the results into the ultrasonic frequency range of hundreds of kHz. An acousto-optic technique of air-coupled vibrometry (ACV) has been adapted for imaging and non-invasive quantitative probing of nonlinear airborne ultrasound. At a fundamental frequency of 200 kHz, the Mach number is shown to rise above M > 0.001 even for the ultrasonic beams used in commercial equipment, which makes high-frequency airborne ultrasound strongly nonlinear. The experiments confirm that such beams are affected by nonlinear attenuation and intense second harmonic generation. The experimental results comply well with theoretical estimations which account for the interplay between acoustic dissipation and nonlinearity.
In imperfect solid materials, the acoustic nonlinearity develops quite differently from classical lattice nonlinearity, owing to strongly nonlinear vibrations in the flaw areas. In this study, such non-classical local nonlinearity is shown to be accompanied by the radiation of high-frequency airborne ultrasound (Nonlinear Air-Coupled Emission, NACE). A direct visualization of the NACE, in the form of higher-order harmonics and sub-harmonics radiated from damaged areas in solid materials and components, is reported using the ACV. The ACV also quantifies the nonlinear airborne radiation produced by the non-classical nonlinearity of planar defects. The imaging technique is effective for defect characterization by identifying their far-field NACE patterns, since the directivity of the radiated field is a spatial Fourier transform of the vibration velocity distribution in the source (defect) area. The efficient radiation of airborne higher harmonics makes it possible to apply conventional air-coupled transducers to detect the NACE, which is used as a nonlinear "tag" to locate and image the defects.
(1) Donetsk A.A. Galkin Institute of Physics & Engineering of NASU, Ukraine (2) V.A. Kotelnikov Institute of Radioengineering & Electronics of RAS, Moscow, Russia
ABSTRACT
At the present time, the search for acoustic analogues of the extraordinary electromagnetic properties of metamaterials (such as superlenses, cloaking, negative refraction, double-negative media, etc.) is a main direction in the physical acoustics of composite media. However, despite the constantly growing number of publications devoted to this theme, all theoretical and experimental works known until now have been associated exclusively with nonmagnetic acoustic metamaterials.
The aim of this report is a theoretical study of the possibility of resonant amplification of an evanescent SH acoustic wave by means of a slab of 2D magnetic acoustic metamaterial. As an example of a 2D magnetic acoustic material we consider a two-component, acoustically continuous structure consisting of an elastically isotropic nonmagnetic solid matrix containing a set of infinite ferro- or antiferromagnetic rods of circular cross-section with a metal coating. Within the effective medium approximation, the necessary conditions are determined under which the reflection coefficient of an incident shear elastic wave (bulk or evanescent) is equal to zero for an acoustically continuous structure composed of a slab of the 2D magnetic acoustic material and an elastically isotropic nonmagnetic layer. The anomalies found in this work in the propagation of a shear elastic wave through a layered, acoustically continuous structure containing a layer of a composite magnetic material represent an acoustic analogue of the amplification of photon tunneling by a layer of a uniaxially anisotropic left-handed medium.
(1) Seikei University, Tokyo, Japan (2) Kobe Steel Ltd., Kobe, Japan
ABSTRACT
A micro-perforated aluminum thin plate has been developed as a new sound absorption material that is resistant to water, oil and heat. However, a thin plate is easily set into vibration by sound pressure, and this vibration affects the sound absorption performance. We carried out experiments to clarify the relation between the sound absorption coefficient and the vibration of the micro-perforated plate. The natural frequencies and vibration modes of the micro-perforated thin aluminum plate were observed using a scanning laser Doppler vibrometer, and the sound absorption coefficient of the plate was measured by the two-microphone method. We found that the sound absorption performance was affected by the natural vibration modes and that there was a particular mode that decreased the sound absorption performance remarkably, occurring when the particle velocity of the air and the vibration velocity of the plate were in phase. We also found that damping is effective in improving the local dip in the sound absorption coefficient.
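For reference, the two-microphone (transfer-function) method mentioned above determines the normal-incidence absorption coefficient from the transfer function between two microphone positions in an impedance tube. A minimal implementation of the standard ISO 10534-2 relation is sketched below; the geometry and impedance values are examples only and do not correspond to the plates tested in the paper.

```python
import numpy as np

def absorption_two_mic(H12, f, x1, s, c=343.0):
    """Normal-incidence absorption coefficient from the transfer function H12 = p2/p1.

    x1  : distance from the sample surface to the farther microphone (mic 1), m
    s   : microphone spacing x1 - x2, m
    H12 : complex transfer function p(mic 2)/p(mic 1) at frequencies f (Hz)
    """
    k = 2 * np.pi * np.asarray(f) / c
    R = (H12 - np.exp(-1j * k * s)) / (np.exp(1j * k * s) - H12) * np.exp(2j * k * x1)
    return 1.0 - np.abs(R) ** 2

# Consistency check: a purely real normalized surface impedance of 2 gives R = 1/3
f = np.array([500.0, 1000.0])
x1, s = 0.10, 0.05
k = 2 * np.pi * f / 343.0
R_true = (2 - 1) / (2 + 1)
x2 = x1 - s
H12 = (np.exp(1j * k * x2) + R_true * np.exp(-1j * k * x2)) / \
      (np.exp(1j * k * x1) + R_true * np.exp(-1j * k * x1))
print(absorption_two_mic(H12, f, x1, s))   # ~0.889 at both frequencies
```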
Acoustics Department, TNO Science and Industry, Delft, The Netherlands
ABSTRACT
In the densely populated Netherlands, the objective of the Netherlands Ministry of Defence is to find an optimal balance between military training and its impact on the surrounding civilian community. A special case concerns large weapons, such as armor, artillery or demolitions, which create high-energy blast waves. These waves have a low-frequency content, typically between 15 and 125 Hz, and can propagate over large distances. As a result they are a relatively important cause of annoyance. Using a dedicated model for military training facilities, rating sound levels around a facility can be calculated for different training situations and the effect of measures can be determined. This model uses linear sound propagation and an equivalent linear source strength. The source is measured at a sufficiently large distance, between 100 and 200 m, where the sound propagation has become linear. As a consequence, the effects of the ground and the meteorology are also included in the measurement and must be corrected for. A more efficient approach has been tested, in which the measurements are made close to the source, typically at less than 10 m distance. The linear source strength is then calculated by applying a non-linear propagation model. The results are compared to the conventional measurement method. Another advantage of applying the non-linear model, and the non-linear source strength, is that the effect of mitigation measures close to the source can be determined.
(1) School of Mechanical Engineering, The University of Adelaide, Australia (2) School of Mechanical and Mining Engineering, The University of Queensland, Australia
ABSTRACT
The generation of aeolian tones by the interaction of a low Reynolds number, low Mach number flow with a rigid square cylinder with a rigid thin flat plate attached is numerically investigated. When the length of the plate is varied from L = 0.5D to 6D, where D is the side length of the square cylinder, the results can be grouped into three distinct regimes. In the first flow regime (L ≲ D), the aeolian tone levels decrease with increasing plate length. In the second regime (2D ≲ L ≲ 4D), the aeolian tone levels are always higher than for the single square cylinder and increase with increasing plate length. In the third regime (5D ≲ L ≲ 6D), the aeolian tone levels decrease as the plate length increases but remain higher than in the other regimes. These acoustic results can be explained in terms of the fluid mechanics of the near wake of the cylinder.
School of Mechanical Engineering, The University of Adelaide, SA, Australia
ABSTRACT
The use of aeroacoustic beamforming has increased dramatically in the past decade. The primary driving force behind this has been the need to improve the noise characteristics of aircraft and automotive vehicles, coupled with ever increasing computer processing power. Aeroacoustic beamforming is an experimental technique that uses an array of microphones located in the far field of acoustic noise sources generated by a body in air flow. Each microphone measures an acoustic magnitude and relative phase based on its unique position with respect to the acoustic source(s). Beamforming algorithms process this data, typically to generate spatial noise source plots over a two dimensional grid at each frequency of interest. Much of the available aeroacoustic beamforming literature presents results at relatively high frequencies corresponding to large facilities, scale models, and available budgets, which can potentially set unrealistic goals for the development of a small-scale university research facility. This paper details the design and calibration of a small aeroacoustic beamformer, designed to investigate airfoil trailing edge noise for low to moderate Reynolds number flows. The optimisation of the microphone array, based on spatial, air flow and financial constraints, is presented. The algorithms which were used to calculate the beamformer outputs are described, as well as the array calibration process, including beamforming of various noise sources in an anechoic environment. The array is shown to successfully detect and accurately locate both tonal and broadband noise sources.
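As an illustration of the conventional frequency-domain (delay-and-sum) beamforming that underlies such processing, the sketch below forms a source map from a cross-spectral matrix with free-field steering vectors. It is a generic textbook formulation under simplifying assumptions (no flow or shear-layer correction, monopole steering), not the specific algorithms, array geometry or calibration procedure of the facility described above; all values are placeholders.

```python
import numpy as np

c = 343.0                                                 # speed of sound (m/s)

def steering_vector(mic_pos, grid_point, f):
    """Free-field monopole steering vector for one grid point at frequency f."""
    r = np.linalg.norm(mic_pos - grid_point, axis=1)      # mic-to-point distances
    return np.exp(-2j * np.pi * f * r / c) / r

def beamform_map(csm, mic_pos, grid, f):
    """Conventional beamformer output at each grid point for one frequency."""
    out = np.empty(len(grid))
    for i, g in enumerate(grid):
        v = steering_vector(mic_pos, g, f)
        w = v / np.linalg.norm(v)                         # normalized steering weights
        out[i] = np.real(w.conj() @ csm @ w)              # w^H C w
    return out

# Toy example: random 32-microphone planar array, single synthetic source at 2 kHz
rng = np.random.default_rng(0)
mic_pos = np.c_[rng.uniform(-0.5, 0.5, (32, 2)), np.zeros(32)]
src = np.array([0.1, -0.05, 1.0])
f = 2000.0
p = steering_vector(mic_pos, src, f)                      # microphone pressures from the source
csm = np.outer(p, p.conj())                               # ideal (noise-free) cross-spectral matrix
grid = np.array([[x, y, 1.0] for x in np.linspace(-0.3, 0.3, 25)
                             for y in np.linspace(-0.3, 0.3, 25)])
bf = beamform_map(csm, mic_pos, grid, f)
print("peak at grid point:", grid[np.argmax(bf)])         # close to the true source position
```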
(1) Laboratoire de Mécanique des Fluides et d'Acoustique, Ecole Centrale de Lyon, Ecully, France (2) Société Nationale des Chemins de Fer, Paris, France
ABSTRACT
Outdoor sound propagation involves many complex phenomena, such as interactions of the acoustic waves with local wind and temperature fluctuations in the atmospheric boundary layer, or terrain effects due to impedance grounds and topography. Moreover, in the context of transportation noise, acoustic sources are usually broadband and in motion. Time-domain numerical solutions of the linearized Euler equations (LEE) are well suited to studying broadband noise propagation outdoors, since they can take into account the interactions of the acoustic waves with local wind and temperature fluctuations in the atmospheric boundary layer. The motion of the acoustic sources can also be considered in this type of simulation, which can be useful in the context of transportation noise. Finite-difference time-domain methods are thus becoming increasingly popular in the outdoor sound propagation community. One of the main difficulties is to account for the reflection of acoustic waves over an impedance ground. A time-domain boundary condition has recently been proposed and has been implemented in a finite-difference time-domain solver using methods developed for computational aeroacoustics. We first consider the propagation of an initial pulse over a distance of 100 m in a three-dimensional geometry, in a frequency band up to 600 Hz. Surface waves, which propagate close and parallel to impedance grounds, are exhibited. The numerical results are compared in the time domain with an analytical solution; the tails of the pressure signals are well predicted by the surface wave. A long-range configuration in 2D geometry is then investigated, in homogeneous conditions and in downward-refracting conditions, with the impedance of a grassy ground and of a snow-covered ground. Numerical results are compared in the time domain to an analytical solution for the homogeneous conditions and to a ray-tracing code for the downward-refracting conditions. Near the ground, surface waves are the dominant arrivals in both cases.
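For reference, a commonly used simplified form of the linearized Euler equations for the acoustic pressure p' and velocity u' about a mean flow (V, rho_0, c_0), with mean-flow gradient terms neglected, is

\[ \frac{\partial p'}{\partial t} + \mathbf{V}\cdot\nabla p' + \rho_0 c_0^{2}\,\nabla\cdot\mathbf{u}' = \rho_0 c_0^{2}\, Q, \qquad \frac{\partial \mathbf{u}'}{\partial t} + (\mathbf{V}\cdot\nabla)\mathbf{u}' + \frac{1}{\rho_0}\nabla p' = 0, \]

where Q is a mass source term that can be used to introduce an initial pulse or a moving source. The solver referred to above discretizes equations of this family with finite differences in time and space and applies a time-domain impedance condition at the ground; the exact set of equations and boundary treatment used in the paper may differ from this simplified statement.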
(1) School of Mechanical and Manufacturing Engineering, The University of New South Wales, Sydney, Australia (2) Maritime Platforms Division, Defence Science and Technology Organisation, Melbourne, Australia (3) Institute of Mechanics, Universität der Bundeswehr München, Neubiberg, Germany
ABSTRACT
A computational approach is proposed to extract the acoustic sources generated by low Mach number flow past a circular cylinder and to predict the associated far-field acoustic pressure. The transient hydrodynamic flow field is calculated using an incompressible computational fluid dynamics (CFD) solver. The acoustic sources are extracted from the hydrodynamic flow field based on the linearised perturbed compressible equations (LPCE). These acoustic sources are combined with a boundary element method (BEM) model of a rigid circular cylinder and the far field sound pressure level is predicted. The results from this hybrid CFD/BEM approach are presented for flow past a circular cylinder with Reynolds number, ReD=100 and Mach number, M=0.15. The directivity of the radiated sound pressure field at the vortex shedding frequency agrees well with results of alternate methods available in the literature.
School of Mechanical Engineering, University of Adelaide, SA, Australia
ABSTRACT
The efficient computation of turbulent airfoil trailing edge noise is important for the cost-effective design of fixed and rotary-wing aircraft, wind turbines, fans and submarines. Recently, the computation of trailing edge noise has mainly been attempted using either direct or hybrid methods of computational aeroacoustics (CAA). However, many of these approaches rely on expensive transient flow solution methods for acoustic source term calculation, such as direct numerical simulation (DNS) or large eddy simulation (LES), which aren't appropriate for engineering design purposes. This paper will present a new approach for calculating turbulent trailing edge noise. Instead of using DNS or LES for a flow solution, the method uses mean flow solutions (Reynolds Averaged Navier Stokes or RANS) and a statistical model to calculate acoustic source terms and radiated far-field noise. After the method is presented, results showing the noise generated by the passage of turbulent flow past a sharp edged flat plate will be shown. For the purposes of validation, the model will use mean flow data from both DNS and RANS solutions to calculate the acoustic source terms. Simulated noise will then be compared with an empirical model of flat-plate trailing edge noise. The paper will conclude with remarks on the accuracy of the method and a discussion of future test cases required to test its validity in more challenging flow conditions.
Pusan National University, Korea
ABSTRACT
In this paper, low-noise centrifugal fans are developed by applying a new design concept which can reduce the airfoil self-noise by inducing phase differences of potential sources on the trailing edge lines of the fan blades in the span-wise direction. These design concepts are realized by modifying the existing linear trailing edge lines of the fan blades into inclined S-shaped trailing edge lines. First, the validity of the low-noise design concepts is confirmed by experiments carried out with four prototype fans. These results show that noise reductions of approximately 2 to 3.5 dB are achieved for the new fans in comparison with the original fan. These reductions are retained over the range of fan rotation speeds from 1800 rpm to 2400 rpm. A detailed comparison of the sound pressure spectra of the new fans and the original fan reveals that these reductions are mainly due to broadband noise reduction rather than to the BPF components. To analyze the detailed mechanisms of noise reduction of the newly developed inclined S-shaped fans, further analysis is made using hybrid computational aeroacoustic techniques in which computational fluid dynamics (CFD), the acoustic analogy, and the boundary element method (BEM) are sequentially used. The validity of the numerical results is confirmed by comparing the predicted BPF noise components with the measurements. It is found that the turbulence kinetic energy of the fluid predicted for the inclined S-shaped fans is lower than that for the existing fan. This implies that the main mechanism for the noise reduction of the newly developed fans is the decreased turbulence kinetic energy, considered as a qualitative index for the source magnitude of broadband self-noise.
The Marcus Wallenberg Laboratory, Royal Institute of Technology, Stockholm, Sweden
ABSTRACT
Junctions and cavities are common elements in flow ducts such as automotive intake and exhaust systems, ventilation systems or pipelines. The aeroacoustic response of such elements is strongly influenced by the mean flow configuration in the system. In low Mach number applications the fluid-acoustic interaction is often described as the continuous interaction of hydrodynamic instabilities with the acoustic field as they are convected across the aperture. The interaction can be constructive or destructive, that is, both attenuation and amplification of incident sound are possible. At low amplification rates the system is still linear; however, if the amplification rate is too high, the interaction becomes nonlinear, leading to a self-sustained oscillation. This can lead to intense noise and even mechanical failure. The frequencies at which a system can potentially sustain an oscillation can be predicted from analysis of the linear system, since the frequency at which it occurs is given by the convection of the hydrodynamic instabilities across the aperture (which is not influenced by the vorticity strength). Hence, the interaction between the hydrodynamic and acoustic fields collapses well with a Strouhal number based on the frequency of the incident sound and the convection velocity of the hydrodynamic disturbance. A well defined case is grazing flow past an orifice: here the characteristic length is easily defined (simply the effective length of the aperture) and the convection velocity is around half the mean flow velocity. For other flow configurations these quantities are not as obvious to define. An example is studied here: a T-junction is subjected simultaneously to grazing and bias flow, and hence both the effective length the vorticity travels across the aperture and the convection velocity will change. The purpose of this work is to understand and quantify the influence of combined grazing-bias flow on the collapse Strouhal number.
The method is mainly experimental and involves detailed measurements on a T-junction of rectangular cross-section. The T-junction is treated as a linear acoustic three-port from which the quantities of interest can be derived. The three-port is determined via the two-microphone wave decomposition method using the source switching technique. Since the whole analysis assumes a linear system, the excitation of the system (here by loudspeakers) must not be too high, and any resonant behaviour should be avoided. Hence, each branch of the three-port is terminated with a large resistive silencer.
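The characterisation described above relies on the two-microphone wave decomposition. A minimal sketch of that decomposition for a single frequency, assuming plane waves and neglecting the mean-flow correction to the wavenumbers (which the actual measurements would include), is given below; names are illustrative.

import numpy as np

def wave_decompose(p1, p2, x1, x2, freq, c=343.0):
    """Split two complex pressure amplitudes measured at axial positions x1, x2
    into downstream (p_plus) and upstream (p_minus) travelling plane waves.

    Assumes p(x) = p_plus*exp(-i k x) + p_minus*exp(+i k x), no mean flow."""
    k = 2.0 * np.pi * freq / c
    A = np.array([[np.exp(-1j * k * x1), np.exp(1j * k * x1)],
                  [np.exp(-1j * k * x2), np.exp(1j * k * x2)]])
    p_plus, p_minus = np.linalg.solve(A, np.array([p1, p2]))
    return p_plus, p_minus

The collapse variable discussed above would then be a Strouhal number of the form St = f L_eff / U_c, with the convection velocity U_c of the order of half the mean flow velocity for pure grazing flow.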
Markus Wallenberg Laboratory for Sound and Vibration Research, Stockholm, Sweden
ABSTRACT
Modelling of the acoustic properties of ducts, and especially of the influence of and interaction with mean flow, is a challenge. Often the problem is reduced by assuming that the system under study can be broken down into a network of linear multiports. These multiports are then characterised individually, either analytically or by experiments or numerical simulations.
In control theory, methods for assessing the stability of this type of network of multiports are widely used. By applying the Nyquist stability criterion, frequencies where the system can become unstable at a certain gain are identified. In this work the Nyquist stability criterion is applied to detect frequencies where self-sustained oscillation can occur in a flow duct system. The test case is a side branch orifice, realised as a T-junction, which is subjected to grazing flow. Hydrodynamic instabilities in the shear layer interact with the acoustic field while being convected across the orifice. When the acoustic period matches the travel time of the hydrodynamic instabilities, incident sound can be amplified. If the amplification rate is sufficiently high, as it would be if a resonant system is present, the response becomes nonlinear, resulting in a self-sustained oscillation. First the T-junction is characterised experimentally and presented as a linear acoustic three-port. This three-port is then connected to other linear elements to form a simple network. Finally the stability analysis is applied to the complete system matrix. It is shown that, provided the resonant system has the appropriate characteristics to match the fluid-acoustic interaction at the orifice, the system is unstable. It is also possible to find the amount of damping needed to make the system stable again. The results are of great practical use for anyone involved in designing flow duct systems. Being able to predict a nonlinear phenomenon such as self-sustained oscillation with simple linear models is a highly effective engineering tool.
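As an illustration of how measured multiport data might feed a stability check, the sketch below flags frequencies where an assumed open-loop gain of the assembled network exceeds unity at a zero-phase crossing. This is only a simplified engineering proxy for the full Nyquist encirclement count used in the paper; all names and thresholds are illustrative.

import numpy as np

def potential_instability_freqs(freqs, loop_gain):
    """Flag frequencies where the open-loop gain of the assembled duct network
    exceeds unity while its phase crosses zero (a simplified proxy, over a
    measured band, for the Nyquist criterion).

    freqs     : increasing array of frequencies [Hz]
    loop_gain : complex open-loop gain G(f) evaluated at those frequencies
    """
    g = np.asarray(loop_gain)
    mag_ok = np.abs(g) > 1.0
    phase = np.angle(g)
    # indices where the phase changes sign between consecutive samples
    crossings = np.where(np.sign(phase[:-1]) != np.sign(phase[1:]))[0]
    return [freqs[i] for i in crossings if mag_ok[i]]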
MWL Sound and Vibrations, Linne FLOW Centre, KTH, Royal Institute of Technology, Stockholm, Sweden
ABSTRACT
We present an efficient methodology to perform calculations of acoustic propagation and scattering by geometrical objects in ducts with flow. In this paper a methodology using a linearized Navier-Stokes equations solver in the frequency domain is evaluated on a two-dimensional geometry of an in-duct area expansion. The Navier-Stokes equations are linearized around a time-independent mean flow that is obtained from an incompressible Reynolds Averaged Navier-Stokes solver which uses a k-ε turbulence model. A plane wave decomposition method based on acoustic pressure and velocity is used to extract the up- and downstream propagating waves. The scattering of the acoustic waves by the in-duct area expansion is calculated and compared to experiments. Frequencies in the plane wave range up to the cut-on frequency of the first higher-order propagating acoustic mode are considered. The acoustic properties of the area expansion are presented in a scattering matrix form that can be used in acoustic two-port calculations on complex duct systems such as exhaust system mufflers and ventilation systems.
(1) Laboratoire Central des Ponts et Chaussées, France (2) Électricité de France R&D, France (3) Laboratoire Mathématiques Appliquées aux Systèmes, Ecole Centrale Paris, France
ABSTRACT
Intrinsic variability due to micrometeorological effects and/or ground effects, measurement uncertainty and model uncertainty are the main sources of spread in the parameters influencing outdoor sound propagation. Thus the spread associated with outdoor SPL is a complex combination of deterministic, stochastic and epistemic uncertainties, and can be quantified by means of a probabilistic process. This statistical process, called Calibration Under Uncertainty (CUU), is presented in this paper. Quantitative uncertainty assessment involves a pre-existing physical system to be studied, input data which can be measured or derived from measurements, and a sufficient amount of available (experimental and/or numerical) data, possibly supplemented by human expertise. CUU couples information from experimental and modelled data, taking into account their own uncertainties (measurement errors, lack of knowledge of the physical behaviour, etc.) under specific assumptions. Quantifying the global uncertainty on SPL, ranking or apportioning the contributions of influential parameters to a given output quantity of interest, comparing experimental and effective parameters, and more generally understanding the whole input-output structure are the main tasks of such a statistical method. The CUU process has been applied to cases of varying complexity using a large experimental data set (Lannemezan 2005 (F)). An application to near-ground sound propagation was first carried out to understand the relative influence of the ground parameters. A more complex case considering large distances and including micrometeorological effects has also been completed, with promising results which are presented in this paper.
University of Sydney, NSW, Australia
ABSTRACT
The acoustic signature of unmanned aerial vehicles (UAVs) is one of the limiting factors facing the expanding use of these platforms for both civil and military applications. The overall propeller noise signature can be reduced by firstly reducing the motor noise and the blade passage noise, which is a result of the propeller's rotational speed, diameter and shape. However, once these are optimised, only modifications to the propeller self-noise will help to further reduce the platform's noise signature. This investigation presents one method to reduce the propeller self-noise by tripping the boundary layer on a small propeller (diameter ~250 mm) with a short chord length (15~30 mm), with blades operating at low Reynolds numbers. Laminar separation bubbles commonly occur on propellers of this size as a result of the low Reynolds number conditions existing on the blades. Experiments have shown that boundary layer tripping not only reduces the drag of the blade, but that when a laminar separation bubble on the suction surface of the propeller blade is eliminated a noise reduction occurs as well. The reasons for this noise reduction were not initially clear, and so its characteristics were examined experimentally on a rotating propeller in both static and wind tunnel conditions. These experiments have helped to show that a number of aerofoil noise mechanisms are at work simultaneously, and do not necessarily occur as the simple turbulent or laminar boundary layer noise models traditionally assumed. Analysis of the spectral peaks has exhibited characteristics of laminar boundary layer noise, even in the presence of a laminar separation bubble, which would promote boundary layer transition on the blade surface. Comparisons with literature models such as the semi-empirical aerofoil self-noise model of Brooks, Pope, et al (1989) have also shown agreement with laminar boundary layer noise characteristics.
The leading edge trip has proved successful in achieving a broadband reduction in simulated operational conditions, resulting in a repeatable noise reduction of up to ~6 dB(A) SPL at the sampling location, but has not yet been successful in fly-over tests. It is hypothesised that the laminar separation bubble is the most likely amplification source for the Tollmien-Schlichting instability waves, which then reach sufficient amplitude to be radiated as noise from the trailing edge. The elimination of the laminar separation bubble removes the strong laminar boundary layer noise source and also reduces the noise generated by the turbulent boundary layer.
Tokyo University of Agriculture & Technology, Tokyo, Japan
ABSTRACT
A system for monitoring gusts of wind such as tornado-like vortices is desired, for example at railroads or airports. It is not realistic to use anemometers for this purpose because ordinary anemometers provide fixed in-situ observations and large numbers would be needed. To address this problem, acoustic line array elements were placed along the facing sides of the monitoring region. From remote observation of the travel times along the multiple propagation paths between the facing line elements, the two-dimensional vortex air flow profile was reconstructed based on the Fourier central slice theorem, which is valid for vector vortex air flow fields. The previous method by the present authors was extended to cover an inclined vortex wind field including the vertical axial air flow component. To this end, the target horizontal vortex components were discriminated from the axial flow components using the symmetry properties of the travel time characteristics over the observation line. As an indoor experimental system, 10 pairs of ultrasound transmitters/receivers were arrayed on the facing sides of a measurement region of 36 cm x 36 cm. Vortex wind fields from an electric fan (with diameter 190 mm) were reconstructed under various wind source conditions. The results demonstrated that the precision of the estimated vortex parameters (maximum vortex flow speed, size and position of the vortex wind field) was satisfactory, which confirms the feasibility of the present method.
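The reconstruction described above rests on the standard travel-time relation of acoustic flow tomography. In the usual formulation (not necessarily the authors' exact one), the difference between the reciprocal travel times along a path from transducer A to B isolates the flow contribution,

t_{AB} - t_{BA} \approx -\frac{2}{c_0^{2}} \int_{A}^{B} \mathbf{u}\cdot \mathrm{d}\boldsymbol{\ell},

so that path integrals of the projected velocity over many crossing paths can be inverted tomographically, here via the Fourier central slice theorem.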
School of Mechanical Engineering, The University of Adelaide, Adelaide, SA, 5005 Australia
ABSTRACT
This paper presents an acoustic analysis of the noise generated at the trailing edge of a flat plate encountering low turbulence fluid flow. Experimental measurements were taken in an anechoic wind tunnel using four microphones: one mounted above the trailing edge, one below the trailing edge, one adjacent to the trailing edge and one above the leading edge. The noise spectra produced by the flat plate were recorded at the four microphone locations. Information about the strength and directivity of the trailing edge noise is determined by comparing the four signals. Subtracting the out-of-phase signals at the microphones above and below the trailing edge is shown to increase the airfoil self-noise spectra further above that of the ambient noise and is shown to be an effective signal extraction technique.
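The subtraction technique described above exploits the fact that a trailing-edge (dipole-like) source radiates with opposite phase above and below the plate, while much of the facility background noise does not. In its simplest form (a sketch only, not necessarily the exact weighting used),

p_{\mathrm{TE}}(t) \approx \tfrac{1}{2}\left[\,p_{\mathrm{above}}(t) - p_{\mathrm{below}}(t)\,\right],

which reinforces the anti-phase self-noise and partially cancels in-phase or uncorrelated background noise.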
Faculty of Engineering, Niigata University, Japan
ABSTRACT
Rapidly growing recent demands for accurate simulations in acoustic design processes involving outdoor acoustic environments, such as road traffic noise barriers, have urged the development and application of more advanced models that can deal with the characteristics of such outdoor environments. The linearized Euler model is known to be one of the most accurate models for such advanced simulations in that it can take into account the effects of nonuniform and even unsteady turbulent background flows and temperature gradients, which supposedly have a large influence on typical outdoor acoustic propagation situations. The model has traditionally been implemented using finite-difference numerics on structured grids thanks to its compatibility with higher-order schemes. However, for real-world complex geometries such as urban city blocks it may make sense to apply the finite-volume technique, which in general is computationally heavier but can handle fully unstructured grids. In the present study, linearized Euler implementations based on the traditional second- and higher-order finite-difference techniques and the new unstructured finite-volume technique are compared in terms of errors from theoretical solutions and computational costs. A modified version of one of the benchmark problems laid down by the NASA/LaRC CAA workshop is used as the test case. The accuracy of the results from the finite-volume technique turned out to match that of the finite-difference techniques with slight lags, however with 20 - 300 times higher processor and memory usage.
Département Fluides, Thermique, Combustion, Institut Pprime CNRS, Université de Poitiers, Poitiers, France
ABSTRACT
The time-reversal (TR) technique has been extensively developed over the last two decades, but very few applications have concerned the field of aeroacoustics. The possibility of using the TR technique in the context of wind-tunnel measurements is therefore investigated in this study, in order to localize a sound source in a flow. The chosen strategy is the following: in a first, experimental step, the pressure fluctuations are recorded in the far field over a linear array of microphones located outside the flow; in a second, simulation step, the experimental signals are time-reversed and used as input data. The time-reversed linearized Euler equations are then solved numerically in order to model the sound propagation through the shear layer and the flow. The back-propagated pressure field is then investigated, both in terms of energy and phase. Some preliminary simulations show that it is possible to localize a monopolar source located in a flow by using this method. The experimental results at Mach number 0.12 show that a monopolar source at 5 kHz can be satisfactorily located, with an error of the order of half the acoustic wavelength. Some measurements concerning a dipolar source are also presented: the effects of the flow on the radiation appear clearly in the data, and the source position is estimated with an error of the order of the acoustic wavelength.
(1) Madras Institute of Technology, Anna University, Chennai, India (2) Aeronautical Development Establishment, Bangalore, India (3) Division of Avionics, Madras Institute of Technology, Anna University, Chennai, India
ABSTRACT
Acoustic surveillance of the battlefield enables the detection, classification, localization and tracking of sound sources of military interest, including ground vehicles, air vehicles, etc. An application of current interest is the detection and localization of sources on the battlefield using acoustic vector sensors (AVS) onboard an unmanned aerial vehicle. The acoustic self-noise environment onboard an unmanned aerial vehicle is dominated by propulsion engine noise, with air flow noise contributing to a lesser extent.
By applying suitable signal processing and pattern recognition methods, it is shown that an unmanned aerial vehicle can provide an effective platform for locating sources on the battlefield. Onboard sensors such as EO systems have been incorporated on UAVs; however, their performance is limited by field of view (FoV), terrain conditions, foliage and vegetation, day/night conditions, etc. Hence the acoustic modality is increasingly being considered to locate acoustic events such as gun shots, the movement of tankers, trucks, snipers, armoured vehicle activity on the ground and other aircraft, and also for cueing onboard EO payloads towards targets.
The Doppler-shifted frequency time histories derived from spectrogram contours, and the Lloyd's mirror interference pattern in the time-frequency distribution of the output of an acoustic vector sensor positioned above the ground and onboard, are used in this approach for estimating parameters. Acoustic intensity measured using an AVS in three orthogonal directions at a point is a powerful quantity that can be used to estimate the source bearing with simple computations. Methods, with examples, for extracting tactical information from acoustic signals emitted by continuous and transient acoustic events are provided for acoustic vector sensors both on the ground and onboard. In addition, dynamic parameters (velocity, direction of motion, height and distance to the closest point of approach, CPA) can be estimated under certain conditions. The considerations presented in this paper are confined to time-frequency analysis of the radiated noise.
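As a minimal sketch of the intensity-based bearing estimate mentioned above, assuming calibrated, collocated pressure and particle-velocity channels (names are illustrative, not the authors' implementation):

import numpy as np

def avs_bearing(p, vx, vy):
    """Estimate source azimuth from collocated pressure and particle-velocity
    time series of an acoustic vector sensor, via the time-averaged active
    intensity components in the horizontal plane."""
    ix = np.mean(p * vx)   # active intensity component, x direction
    iy = np.mean(p * vy)   # active intensity component, y direction
    return np.degrees(np.arctan2(iy, ix))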
(1) Physics Laboratories, Kyushu Institute of Technology, Iizuka, Fukuoka, Japan (2) Research Institute for Information Technology, Kyushu University, Higashi-ku, Fukuoka, Japan
ABSTRACT
Edge tones are acoustic fluctuations generated by the oscillation of a jet emanating from a flue and colliding with an edge. The study of edge tones has a long history and many authors have contributed to this problem. It is considered that some feedback mechanism, fluid and/or acoustic, sustains the jet oscillation, whose frequency mainly determines the frequency of the edge tone emitted by the aerodynamic sound sources, the so-called Lighthill sources. However, the detailed mechanism of the edge tone is still not completely understood.
The aim of our study is to specify the positions of the sound sources and to clarify how they are created in the turbulence and how the sound is emitted from them, in terms of aerodynamic sound theory. As a first step, we numerically reproduce the jet oscillation as a sound source and the edge tones as its product, simultaneously, for 2D and 3D models with compressible large-eddy simulations. In previous work we succeeded in reproducing the sound vibrations of 2D and 3D air-reed instruments with a numerical scheme provided by the free software OpenFOAM.
In this paper, we concentrate on the simple case of a symmetric edge without a resonator and calculate edge tones for 2D and 3D models with varying jet velocity. Lighthill's sound sources are obtained numerically and their behavior is analyzed with statistical methods. Mutual correlations between the sound sources and the sound field are calculated in order to examine the details of their interaction. With these results, we try to identify the most dominant area of sound sources distributed around the jet and the eddies behind the edge, which are generated by the collision of the jet with the edge.
We also compare Lighthill's sound source with the sound source of the vortex sound theory formulated by Howe. In the vortex sound theory, the sound wave is considered as propagation of fluctuation of the total enthalpy instead of the air pressure or air density. Thus, the formulae are different and so are the source terms. We will clarify the difference of source distribution between Lighthill's and Howe's formulae and will discuss why such a difference occurs.
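For reference, the two analogies compared above are commonly quoted in the following textbook forms (low Mach number form for the vortex sound equation; the authors' implementation details may differ):

\frac{\partial^{2}\rho'}{\partial t^{2}} - c_{0}^{2}\nabla^{2}\rho' = \frac{\partial^{2} T_{ij}}{\partial x_{i}\,\partial x_{j}}, \qquad T_{ij} = \rho u_{i}u_{j} + \bigl(p' - c_{0}^{2}\rho'\bigr)\delta_{ij} - \tau_{ij},

\left(\frac{1}{c_{0}^{2}}\frac{D^{2}}{Dt^{2}} - \nabla^{2}\right) B = \nabla\cdot\bigl(\boldsymbol{\omega}\times\mathbf{u}\bigr).

The Lighthill source is thus the double divergence of the stress tensor T_ij, while Howe's source is the divergence of the Lamb vector ω×u acting on the total enthalpy B, which is why the two formulations yield different spatial source distributions for the same flow.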
Department of Applied Physics, Eindhoven University of Technology, Eindhoven, The Netherlands.
ABSTRACT
Flow-induced pulsations in resonant pipe systems with two closed side branches in cross configuration are considered. These pulsations, commonly observed in many technical applications, are self-sustained aeroacoustic oscillations driven by the instability of the flow along the closed branches. Detuning of the acoustic resonator is often considered as a possible remedial measure. Although this countermeasure appears to be very effective for double side branch systems in cross configuration with anechoic boundary conditions of the main pipe, its effectiveness has not been assessed for different boundary conditions. The significance of the acoustic boundary conditions of the main pipe has been studied by means of experiments conducted on double side branch systems presenting two acoustically reflecting boundaries of the main pipe. While pulsations are often a nuisance, the double side branch system can also be used as a powerful sound source.
Defence Science and Technology Organisation (DSTO), Australia
ABSTRACT
The derivation of Curle's equation for the sound radiated by a flow near a rigid surface is reconsidered. It is shown that this equation and the non-uniform Kirchhoff equation previously derived by the author are equivalent if the sum of two integrals containing Lighthill's stress tensor over the rigid surface is zero. These two integrals are equivalent to the acoustic field radiated by sources determined by Lighthill's stress tensor and its spatial derivatives on the boundary. This leads to an immediate result that the two equations are equivalent if Lighthill's stress tensor vanishes altogether, for instance, for linear acoustical waves in ideal fluid. The obtained criterion is applied to a flow near an infinite rigid plane in a fluid. Two cases are considered: first, a weakly non-linear flow (with low Mach number) in an ideal fluid and, second, a linear flow in a viscous fluid. It is shown that, in a weakly non-linear flow, the equations are equivalent if the plane is stationary, and, if the plane is vibrating, the two integrals are proportional to the value of the plane normal velocity squared and, therefore, the difference between the predictions for the radiated sound by both equations is non-zero in general. It is also shown that, in a viscous linear flow, the difference between the two predictions is also in general non-zero. It is concluded that, although the two equations are different, they lead to equivalent or close predictions in a number of situations. The question of the equivalence of the two equations for flows with large Mach numbers requires further investigation.
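For context, Curle's result for a stationary rigid surface S is commonly quoted (in one sign convention; the author's non-uniform Kirchhoff equation is not reproduced here) as

c_{0}^{2}\,\rho'(\mathbf{x},t) = \frac{1}{4\pi}\,\frac{\partial^{2}}{\partial x_{i}\,\partial x_{j}} \int_{V} \frac{\bigl[T_{ij}\bigr]}{r}\, \mathrm{d}V(\mathbf{y}) \;-\; \frac{1}{4\pi}\,\frac{\partial}{\partial x_{i}} \int_{S} \frac{\bigl[P_{ij}\,n_{j}\bigr]}{r}\, \mathrm{d}S(\mathbf{y}),

where the brackets denote evaluation at the retarded time t - r/c_0 and P_ij is the compressive stress on the surface. The equivalence question discussed above concerns the additional surface integrals of Lighthill's stress tensor T_ij that arise when this form is compared with the non-uniform Kirchhoff equation.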
NURC, France
ABSTRACT
The inversion of sea bottom properties, and in particular knowledge of the sound speed in the seabed, is essential information for the prediction of sonar performance. In shallow water, the surface roughness (sea state) can be a source of error for inversion procedures. The aim of this paper is to assess, in simple cases, the effects of sea surface scattering phenomena on inversion procedures. The paper presents results of simulations performed for various sea states. The inversion was carried out ignoring the sea state in order to estimate its robustness to sea-state variations. Simulations were performed using a conventional normal-mode model (ORCA). Under the small roughness approximation, the sea state was introduced using modal attenuation coefficients, taking into account the sea-state influence on forward scattering. For a given environment and geometry, the acoustic field was computed for a fixed sea state (reference field) and the inversion was carried out for other states. The sound speed in the sediment was recovered by a conventional inversion method based on the Bartlett operator. Numerous simulations were performed for various values of frequency (50-750 Hz), water depth (50-200 m), sea state (1-6) and sediment type (1570-1970 m/s). The estimation errors vary with all the relative values of the parameters and can reach values as high as 400 m/s in the worst cases.
School of Mechanical and Manufacturing Engineering University of New South Wales (UNSW), Sydney, NSW 2052, Australia
ABSTRACT
This work investigates the use of inertial actuators to actively reduce the sound radiated by a submarine hull under harmonic excitation from the propeller. The axial fluctuating forces from the propeller are tonal at the blade passing frequency. The hull is modelled as a fluid loaded cylindrical shell with ring stiffeners and two equally spaced bulkheads. The cylinder is closed by end plates and conical end caps. The forces from the propeller are transmitted to the hull by a rigid foundation connected to the shaft with a thrust bearing. The actuators are arranged in circumferential arrays and attached to the internal end plates of the hull. Two active control techniques corresponding to active vibration control and active structural acoustic control are implemented to attenuate the structural and acoustic responses of the submarine. An acoustic transfer function is defined to estimate the far field sound pressure from a single point measurement on the hull. The inertial actuators are shown to provide control forces with a magnitude large enough to reduce the structure-borne sound due to hull vibration.
ARL, Tropical Marine Science Institute and Department of Electrical & Computer Engineering, National University of Singapore
ABSTRACT
Snapping shrimp dominate the high frequency soundscape in shallow warm waters. The noises produced by these small creatures are a result of the collapse of cavitation bubbles they produce. During the rapid collapse, the temperatures in the bubble can momentarily reach the surface temperature of the sun, and produce impulsive noise with source levels higher than 190 dB re 1 uPa @ 1m. With millions of snapping shrimp in most warm shallow water environments, the resulting cacophony is heard in the form of a background crackle familiar to many tropical divers. The resulting ambient noise has highly non-Gaussian statistics. What implications does this have on acoustic sensing in these environments? Can signal processing techniques developed with Gaussian noise assumptions be used without significant penalty in these environments? Can these shrimp be used as sources of opportunity for sensing? To begin answering some of these questions, we present a review of some of the research on signal processing in impulsive noise. Snapping shrimp noise is modeled accurately by symmetric alpha-stable distributions. Optimal signal processing in alpha-stable noise is often computationally infeasible, but computationally simple near-optimal solutions can be applied with gains up to 5-10 dB. Communicating in environments with snapping shrimp noise has its own challenges. The errors due to the impulsive noise on sub-carriers of a multi-carrier communication system, or the in-phase and quadrature channels of a single carrier system are not independent. If handled inappropriately, forward error correction codes can perform poorly in such systems. However, if the dependence in the errors can be characterized, it can be exploited in the decoding process to get substantial communication performance gains. We show this through an information theoretic analysis of the communication channel with additive symmetric alpha-stable noise. Finally, we turn to some applications where the snapping shrimp sounds can be used as sources of opportunity. They can serve as "illumination" for ambient noise imaging, where underwater objects can be imaged completely passively. They can also be used as sources for geoacoustic inversion of the surface sediment. We present some results from past experiments to show how sediment sound speed can indeed be inferred by simply listening passively to the cacophony of the shrimp.
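A minimal sketch of the kind of computationally simple, near-optimal processing referred to above: heavy-tailed symmetric alpha-stable samples are generated, and the received data are passed through a memoryless clipping nonlinearity before correlation. All parameter values are illustrative, and this is not the authors' specific detector.

import numpy as np
from scipy.stats import levy_stable

def detect_in_sas_noise(template, received, clip_level):
    """Near-optimal detection in symmetric alpha-stable (SaS) noise: clip the
    received samples to limit impulsive outliers, then correlate with the
    known signal template (a simple locally-suboptimal nonlinearity)."""
    clipped = np.clip(received, -clip_level, clip_level)
    return float(np.dot(template, clipped))

# illustrative snapping-shrimp-like noise: SaS with alpha < 2 (heavy tailed)
noise = levy_stable.rvs(alpha=1.6, beta=0.0, scale=1.0, size=200)
template = np.sin(2 * np.pi * 0.05 * np.arange(200))   # arbitrary test signal
received = template + noise
score = detect_in_sas_noise(template, received, clip_level=3.0)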
Scripps Institution of Oceanography, UCSD, La Jolla, USA
ABSTRACT
The origin of underwater noise from breaking waves above a few hundred hertz is thought to be the pulses of sound radiated by newly-formed bubbles. A simple model based on bubble physics shows that breaking wave noise depends on the bubble creation rate, the mechanism(s) of bubble acoustical excitation, and sound scattering and absorption by the plume of bubbles entrained by the wave. Model calculations of the noise based on estimates of these factors are compared with measurements made in a laboratory flume of focused, breaking seawater wave packets. The model calculations are in good agreement with the experimental results once reverberation in the flume is accounted for. A closed-form, analytical expression for the noise from an individual wave event can be obtained from the model. The power-law scaling of noise level with frequency predicted by the model depends on three factors: a factor of 3/2 from the bubble creation rate, a factor of -2 from the shape of the bubble pulse and a factor of -4/3 from bubble damping, determined by thermal and radiation losses. The combined scaling of 3/2-2-4/3 = -11/6 is in good agreement with the -10/6 dependence observed from the Wenz spectra. Elements of the model and its implications for the calculation of noise in the open ocean will be discussed.
QinetiQ North America, VA, USA
ABSTRACT
Antisubmarine Warfare (ASW) is often conducted in littoral, shallow-water areas, where hostile subsurface enemies pose a constant threat—and where the seabed geophysical properties are complicated, and to a great extent unknown to us. Accurate estimates of seabed interface roughness and sediment geophysical properties are critical for proper prediction of sensor and weapon system performance. In the absence of good seabed characterization, tactical mission planning is seldom optimal or efficient. Current data collection survey techniques for geo-acoustic bottom characteristics are expensive, time consuming, and they suffer from time latency (months to years) between collection, processing, analysis, and tactical use. In response to this problem, the U.S. Navy has investigated several new inversion techniques to characterize littoral seabed sediments. Most of these techniques use an active sonar approach that is not covert and is usually limited to areas near the receiver. Purely passive techniques offer the potential to remain covert and greatly extend the area of seabed characterization.
The research described here concerns a new set of algorithms called Passive Geo-Acoustic Inversion Techniques (PGAIT) that act on passive acoustic signals from ships of opportunity and allow non-provocative geo-acoustic characterization. PGAIT uses coherent and incoherent matched-field processing on signals from passing ships received on a vertical aperture in shallow water. There is no need to know the source spectrum. Broadband and temporal averaging techniques are used to reduce ambiguities and to increase the output Signal-to-Noise Ratio (SNR). The algorithms are robust to environmental model mismatch and usually produce an output with at least 10 dB SNR, which is sufficient to identify sediment types. A key element of the process is a method to consider a range-dependent environment as a sequence of range-independent slabs. This nuance contributes strongly to the robust nature of the processor. The performance of PGAIT is demonstrated at frequencies between 30 and 50 Hz in several sediment conditions, ranging from very soft to very hard. The results show that: 1) the vertical aperture should contain at least 3 hydrophones per wavelength to ensure high quality inversions; 2) coherent (phase-only) matched-field processing outperforms standard intensity processing by about 2 dB in good input SNR conditions; 3) incorrect assumptions about the assumed sound-speed profile (e.g., a bias or incorrect mixed-layer-depth) do not significantly affect the inversion results; and 4) the new range-independent slab approach is computationally intensive, but it can resolve discrete sediment boundary discontinuities.
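The core of any matched-field processor of the kind described above is the comparison of the measured array vector with modelled replicas. A single-frequency, single-snapshot Bartlett sketch (not the PGAIT implementation, which adds coherent phase-only processing and broadband/temporal averaging) might look like this; names are illustrative.

import numpy as np

def bartlett_mfp(data_vec, replica_vecs):
    """Normalised Bartlett matched-field ambiguity values.

    data_vec     : (N,) complex pressures on the vertical array at one frequency
    replica_vecs : (K, N) modelled replica fields, one row per candidate environment
    Returns the Bartlett power for each candidate (between 0 and 1)."""
    d = data_vec / np.linalg.norm(data_vec)
    out = np.empty(len(replica_vecs))
    for k, w in enumerate(replica_vecs):
        w = w / np.linalg.norm(w)
        out[k] = np.abs(np.vdot(w, d)) ** 2   # |w^H d|^2
    return out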
Shanghai Marine Electronic Equipment Research Institute, Shanghai, P.R.China
ABSTRACT
With the rapid progress of the supporting technologies, there is an increasing requirement for the application of Underwater Acoustic Networks (UAN) in both commercial and military fields. In commercial applications, terrorists may carry out destructive activities, while in military applications degrading an adversary's UAN is an inevitable activity. Thus it can be seen that security measures must be integrated into UANs. In terrestrial networks, many security measures have been adopted and show increased robustness, but in current studies on UANs few such attempts have been pursued during development. Though there are still some open research issues in this field, mature UANs should have the capability to counter hostile attacks.
The proposed paper attempts to contribute to the security of UANs by presenting two secure routing protocols. The paper consists of the following main contents. (1) The state of the art of UAN routing protocols is reviewed. It shows that three kinds of protocols are dominant: proactive protocols, reactive protocols and geographical ones. (2) A distinct UAN model is established. The network is a typical two-branch-tree structure: one sink node leads to two branch nodes, each branch is divided into two branches again, and at the lowest level there are eight leaf nodes. Such a configuration is a concentrated network, and multi-hop information interchange is needed. (3) Two secure protocols are developed for this network to keep it operating. The first is called PrePro, which uses a preset mechanism: it automatically re-finds the routes after partial destruction. The second is named CastPro, which re-establishes the necessary routes by broadcasting new messages. (4) Within a simulation scenario, the performances of the two protocols are compared via computer simulation. Several characteristics of the network, such as network connectivity, average delay, node connectivity and successful transmission rate, are selected as metrics. At the end of the paper, some of the open research issues that will lead to a robust UAN are listed.
Shanghai Marine Electronic Equipment Research Institute, Shanghai, P.R.China
ABSTRACT
The Underwater Acoustic Network (UAN) has been studied for three decades, and there are now a number of experimental implementations developed by many research organizations. It is believed that most of them will be brought into various applications in the near future. Recent research on UANs has mainly focused on construction and management. Though these studies have covered nearly all aspects of the UAN infrastructure, few efforts have been made regarding security, which is surely an important consideration when UANs are put into practice.
The proposed paper focuses on the security considerations of UANs. The main contents are as follows. (1) The characteristics of UANs are analyzed. The main aspects related to underwater acoustic communication and networking are described, and comparisons between UANs, terrestrial Wireless Sensor Networks (WSN) and ad hoc networks are made, which point to an even greater need for UAN security. (2) The application environments of UANs are analyzed. Both civil and military applications are prone to disruption, either by nature or by deliberate attackers; for example, severe currents and hostile attacks are two main causes. (3) The goals and challenges of a secure UAN are analyzed. Different uses would face different requirements and difficulties. (4) The security threats are analyzed. The study indicates that security problems might occur at all physical and protocol layers. Since hardware weakness is a secondary problem, emphasis is placed on protocol attacks, which might cause large-scale paralysis of a UAN. (5) The countermeasures against the attacks are studied. Based on the threat estimations, security measures are analyzed with respect to the adversary offensives. At the end of the paper, conclusions are drawn from the above description. It is pointed out that security considerations are crucial for UANs, and that security measures must be taken into consideration at the same time.
School of Earth and Ocean Sciences, University of Victoria, Victoria, BC, Canada
ABSTRACT
This paper describes a general Bayesian approach to estimating seabed geoacoustic parameters from measured ocean acoustic fields, which is also applicable to other inverse problems. Within a Bayesian formulation, the complete solution to an inverse problem is given by the posterior probability density (PPD) over the unknown model parameters, which includes both data and prior information. Interpreting the PPD requires computing its properties which provide parameter estimates (e.g., the maximum a posteriori or MAP model which maximizes the PPD), parameter uncertainties (variances, marginal distributions, and credibility intervals), and parameter inter-relationships (correlations and joint marginal distributions). Computing these properties requires either optimizing or integrating the PPD, which must be carried out numerically for nonlinear problems. Here, PPD optimization is carried out using an adaptive hybrid algorithm which combines the local downhill simplex method within a very fast simulated annealing global search. PPD integration is accomplished using the Markov-chain Monte Carlo method of Metropolis-Hastings sampling, rotated into principal components and applying a proposal distribution based on a linearized PPD approximation for efficiency.
In many practical inverse problems both an appropriate model parameterization and the data error distribution are unknown and must be estimated as part of the inversion. These problems are linked, since the resolving power of the data is affected by the data uncertainties. Model selection is carried out by evaluating Bayesian evidence (parameterization likelihood given the data), or a point estimate thereof such as the Bayesian information criterion, which indicates the simplest parameterization consistent with the data. The data error covariance matrix (including off-diagonal terms, as needed) is estimated from residual analysis under the assumption of a simple, physically-reasonable distribution form, such as a Gaussian or Laplace distribution. The validity of these assumptions and estimates is examined a posteriori using both qualitative and quantitative statistical tests on data residuals. The above approach is illustrated by inverting multi-frequency acoustic field data, recorded at a shallow-water test site in the Mediterranean Sea, for a layered seabed geoacoustic model.
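As a minimal sketch of the Metropolis-Hastings building block used for PPD integration (the paper's sampler additionally rotates into principal components and uses a linearized-PPD proposal, which is omitted here; names are illustrative):

import numpy as np

def metropolis_hastings(log_post, m0, prop_std, n_samples, rng=None):
    """Minimal random-walk Metropolis-Hastings sampler for a posterior density.

    log_post : function returning the log posterior density of a model vector
    m0       : starting model (within the prior bounds)
    prop_std : per-parameter proposal standard deviations (scalar or array)"""
    rng = np.random.default_rng() if rng is None else rng
    m, logp = np.array(m0, float), log_post(m0)
    chain = []
    for _ in range(n_samples):
        cand = m + prop_std * rng.standard_normal(m.size)  # random-walk proposal
        logp_cand = log_post(cand)
        if np.log(rng.uniform()) < logp_cand - logp:       # accept with prob min(1, ratio)
            m, logp = cand, logp_cand
        chain.append(m.copy())
    return np.array(chain)

Marginal distributions, credibility intervals and parameter correlations can then be estimated directly from the returned chain.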
Centre for Marine Science and Technology, Curtin University, WA, Australia
ABSTRACT
Many coastal and offshore construction activities require the driving of piles into the seabed, either using impact or vibratory pile drivers. Impact pile driving produces an intense impulsive underwater noise that has been associated with fish deaths at very short range, whereas vibratory pile driving produces a lower level continuous noise. Because of the high sound levels involved, noise from pile driving may have an adverse impact on marine animals, and its characteristics are therefore of considerable interest. This paper presents the results of measurements of underwater noise from pile driving that have been made at a variety of locations around Australia, and presents the results of some attempts to use acoustic propagation modelling to extrapolate these results to other locations.
(1) Kanagawa University, Yokohama, Japan (2) Port and Airport Research Institute, Yokosuka, Japan
ABSTRACT
An acoustic lens has the potential to improve the acoustic characteristics of a transducer, not only in underwater applications but also in medical probes. In this paper, a small acoustic lens for a MHz-range transducer is described. A simple 2D-FDTD calculation based on symmetry is proposed, because 3D-FDTD in orthogonal coordinates requires large memory and long calculation times to estimate the characteristics of the lens. A virtual spherical sound source, whose amplitude distribution is equal to the sound propagation field of the real sound source, is also used to reduce the calculation. Experiments were carried out with a plane-concave lens in a temperature-controlled water bath. Calculated results agree well with experimental results for both the plane and the concave incident surface.
(1) Polish Naval Academy (2) Gdansk University of Technology, Poland
ABSTRACT
Acoustic methods, being non-invasive and remote, are very good tools for studying the stratification of bottom sediments. A particularly useful research tool is a parametric sonar, which works together with other acoustic devices used for underwater observation, such as multibeam sonars and side-scan sonars. Using synthetic aperture techniques, they can provide a relatively large amount of information about the shape and structure of the seabed. The results of sounding the bottom of the Gulf of Gdansk and the Southern Baltic Sea will be presented in the paper. Images of both the seabed bathymetry and the stratification profile of the upper layer of the seabed will also be shown. These results have been precisely located in geographical coordinates and compared with the results of geological investigations in the same area. Moreover, samples from the surface of the sea bottom were collected in selected areas.
Kingsgrove, NSW, Australia
ABSTRACT
Certain offshore structures require pipelines to the shore, which are generally buried beneath the seafloor in a shallow trench. If the seabed is rock, creating a trench can require blasting, which is done with explosions confined in bore-holes drilled into the rock and covered with stemming. A desalination plant is under construction at Binningup, Western Australia. The seabed contains a sand layer (of variable thickness) over Tamala limestone, and five confined blasts were fired in the limestone to create trenches for an outfall and two inlets. The seafloor depth was 10 m at the blast positions. The underwater acoustic signals were monitored by hydrophones out to sea, and the signals from one of these blasts have been selected for detailed analysis. The data for peak Sound Pressure Level from the confined charges are in good agreement with a synthesis of empirical formulae due to Arons (1954), Gilmanov (1984) and Oriard (2002). The characteristics of the acoustic ground wave were affected by a high-density sub-bottom stone rather than the Tamala limestone layer that lies above it. The interface to the dense stone is around 10 m below the seafloor, and is observed to have a wave speed (presumably shear) of 2620 m/s. The blast contained six explosions, and the Sound Exposure Level (SEL) was observed to decrease from 158 to 149 dB re 1 µPa^2.s as range increased from 840 to 2400 m. The SEL spectra have peaks at around 50 to 60 Hz, which can be attributed to the delay of 16 ms between explosions, and minima at around 200 Hz, which appear to be attributable to a second low-frequency cut-off in the vicinity of 600 Hz.
Shanghai Acoustics Laboratory, Chinese Academy of Sciences, P.R.China
ABSTRACT
Between 1405 and 1433 the Chinese Ming Emperor authorized Admiral Zheng He to take great fleets of vessels into the Indian Ocean to conduct trade on the basis of equality and mutual benefit. It is known that a considerable number of ships were wrecked, and lives lost, in the course of Zheng He's voyages. Searching for physical evidence of the wrecks will be well worthwhile, in view of the impact this would have on our knowledge of early East-West interaction.
During the Ming period China was an advanced civilization compared with Western Europe. The port the Chinese fleets visited nearest to Istanbul on the European trade route was the island of Hormuz in the Arabian Gulf. We hope to find information about these voyages by underwater exploration of the approaches to the Straits of Hormuz, and to understand the influence the Chinese had on Arabia and Europe in the fifteenth century.
In this paper, the Chinese-Omani joint project is introduced. The aim of the project is to search for remains of these ships using modern sonars. Finding such remains would throw fresh light on cultural and technological exchanges between East and West in the 15th century. In April 2009, after two years of preparation, Chinese and Omani scientists conducted the first investigation off the coast of Oman. The equipment, survey plan and investigation results are also introduced.
(1) Applied Marine Physics, LLC, Slidell, LA, USA (2) CyberSmiths, Inc., Miami, FL, USA
ABSTRACT
A simulation using maximal length sequences demonstrated the potential for detecting and tracking multiple near-surface targets in shallow, near-shore areas. In our simulation a low power, omnidirectional source and four omnidirectional hydrophones were arbitrarily located in water approximately 4 m deep. Using "channel digit response" processing and "block zeroing", the direct arrival, multipaths, clutter and reverberation were rejected, thereby transforming reverberation limited detection conditions into noise limited detection conditions. With the improved signal-to-interference ratio, a simple probability based algorithm demonstrated tracking of -20 dB targets at source-target distances of 250 m, the maximum range investigated.
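A minimal sketch of the MLS probing idea underlying the simulation: a maximal length sequence is transmitted and the channel response is recovered by circular cross-correlation, after which direct-path and clutter taps could be zeroed. This is illustrative only and is not the authors' "channel digit response" or "block zeroing" processing chain; names are hypothetical.

import numpy as np
from scipy.signal import max_len_seq

def mls_channel_estimate(nbits, channel_ir):
    """Probe a channel with an MLS and recover its impulse response by
    circular cross-correlation (valid when the response is shorter than
    one MLS period)."""
    seq = 2.0 * max_len_seq(nbits)[0] - 1.0           # MLS mapped to +/-1
    n = seq.size
    # transmit two periods and keep the steady-state second period
    received = np.convolve(np.tile(seq, 2), channel_ir)[n:2 * n]
    # circular cross-correlation with the MLS approximates the impulse response
    est = np.array([np.dot(np.roll(seq, k), received) for k in range(n)]) / n
    return est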
Defence Science and Technology Organisation (DSTO), Edinburgh, SA, Australia
ABSTRACT
A technique is described by which the reflectivity of the seafloor in a shallow ocean may be obtained from inversion of received broadband acoustic signals. The technique is quite general in that the source waveform may be either impulsive or quasi-continuous. The product of the inversion is the slope, F dB/radian, of the bottom loss versus grazing angle function, which is assumed linear for small grazing angles of incidence. The technique is based on a description of the spectral statistics of the multi-path interference field in a shallow ocean, and is believed to provide a rapid, but robust, estimate of reflectivity which is adequate for many uses. Examples of application of the technique to at-sea data are shown, in which comparisons are made between measurements of transmission loss and calculations which are based on the inverted parameter. It is shown how the technique may be applied across a broad frequency range, so that estimates of broadband transmission may be made. Potential limitations of the technique are discussed.
(1) Department of Environmental Marine Sciences, Hanyang University, Ansan, Korea (2) Marine Living Resources Research Department, Korea Ocean Research & Development Institute, Ansan, Korea
ABSTRACT
Acoustic scattering by an array of regularly spaced cylinders in a water tank has been investigated both theoretically and experimentally. A new scattering model for a group of cylinders is proposed, based upon the infinite cylinder scattering model, which uses the scattering directivity and phase difference. It includes the interference of the scattered field between cylinders. The Target Strength (TS) is calculated by an integral intensity method. Our proposed model is capable of describing fluctuations of TS with scattering angle, whereas the usual incoherent summation never produces such fluctuations. Also, the scattering patterns corresponding to various scattering angles and frequencies reflect Bragg scattering. In particular, the fact that the regular patterns appearing in side-scattering are strongly correlated with the scatterer spacing suggests that an inverse estimation of scatterer spacing may be possible by measuring the bistatic scattered field.
(1) Tongmyong University, Korea (2) Pukyong National University, Korea
ABSTRACT
In designing an underwater sonar system, such as a hull-mounted sonar, the radiation impedance is a very important design factor because it is associated with the radiation power of the system and the mutual interference forces among the vibrating elements. In a practical system, the sonar has a dome to protect the arrayed elements from the underwater environment, such as flow resistance and shock pressure. However, the acoustic wave from the elements is reflected at the surface of the dome, and its effect on the radiation impedance cannot be ignored. In this study, to analyze the effect of the reflected wave on the radiation impedance, we introduce a model in which two vibrating elements are mounted on an infinite planar rigid baffle and a plane reflector exists in front of the baffle. Using this model, the variation of the radiation impedance with the distance between the elements, the separation from the reflector, the driving frequency, and the complex reflection coefficient of the reflector is calculated. In the calculation, the Ring function is introduced to evaluate the acoustic pressure distribution due to the reflector. Finally, the effect of the wave reflected from the sonar dome on the radiation impedance is also investigated experimentally. The equivalent circuit model used for the theoretical analysis is useful for calculating the change in radiation impedance caused by the reflected wave.
(1) Gdansk University of Technology, Poland (2) Polish Naval Academy
ABSTRACT
The main goal of this paper is to introduce the methodology of preparing the area for investigations that will be carried out at sea. The first step is an assessment of the basic method, both in theory and in experimental investigation. Nonlinear methods were taken into account; these are very promising methods with interesting features that are very convenient for examining the structure of the seabed. The acoustic beam created by the nonlinear interaction of two intense beams has properties that make it very useful in this kind of investigation. The main features are a very narrow beam without sidelobes and a relatively low frequency of the radiated pulse. The relatively small dimensions of the transceiver set-up are also worth mentioning. Of course, one should also remember the low efficiency of this kind of conversion of elastic energy, which amounts to only a few percent.
(1) Japan Coast Guard Academy, Kure, Japan (2) IIS. Univ. of Tokyo, Meguro, Tokyo, Japan (3) Hitachi Ltd., Yokohama, Japan (4) Toyo Corp., Chuo, Tokyo, Japan
ABSTRACT
Recently, the possibility of terrorism from the sea and of crimes using the water space has tended to increase, and the development of underwater monitoring and visualization technology is desired. An underwater visualization technology, if realized, is expected to be useful not only for the discovery of suspicious persons or objects in the water, which could not be observed until now, but also for the search for sunken ships, water security, and investigations in harbour areas.
A three-year promotion program, begun in 2005, entitled "Development of the underwater security sonar system", has been carried out with the support of the Special Coordination Funds for the Promotion of Science and Technology of the Ministry of Education, Culture, Sports, Science and Technology in Japan. In this program, we developed a ship-mounted underwater acoustic surveillance system that can watch the underwater space while remaining mobile on the sea surface. Mounting the sonar on a ship such as a patrol vessel makes it possible to transfer quickly to the required monitoring station and to watch and search efficiently. Moreover, it is an underwater visualization sonar system with high resolution that efficiently detects and distinguishes targets in real time by switching to the optimum frequency according to the distance from the sensor to the targets or the size of the target. In this paper, the composition and imaging principle of the underwater acoustic surveillance system developed in this program are briefly described, and imaging results from an operational test carried out in an actual sea area with the sonar mounted on a test ship are introduced.
Dept. of Ocean Eng., IIT Madras, Chennai, India
ABSTRACT
Breaking waves are believed to be the dominant source of sea surface sound in the ocean. This paper presents the results of a laboratory investigation of the acoustics of breaking waves of different intensities. Measurements of the wave breaking noise in the range 20 Hz - 20 kHz are presented. The experiments were carried out in a 30 m long and 2 m wide wave-current flume with 0.8 m water depth at the Department of Ocean Engineering, Indian Institute of Technology Madras. Wave breaking was generated through wave-wave interaction associated with a frequency- and amplitude-modulated packet. In total, five types of plunging breaker, namely strong, fine, good, moderate and weak plunging, were generated. In the case of plunging breaking, the low frequency energy components of the measured sound are more dominant. Time-frequency (wavelet) analysis has been carried out to identify the acoustically dynamic part of the breaking wave event and the contribution of the different frequency bands; this analysis also shows that the low frequency noise components are more dominant. Measurement of the sound generated by breaking waves can be used to study quantitatively the dynamics of the breaking process, and the acoustic energy radiated by breaking waves is well correlated with the rate of energy dissipation due to wave breaking.
State Key Laboratory of Acoustics, Institute of Acoustics, Chinese Academy of Sciences, Beijing, P.R.China
ABSTRACT
M-ary modulation is widely used to enhance the data rate of underwater communication. However, in an M-ary underwater acoustic communication system it is infeasible to generate theoretically orthogonal spread-spectrum signals using sequences of finite length, and thus inter-code interference is inevitable. In order to predict the BER and provide a theoretical criterion for the design of communication system parameters, a theoretical formula was derived to describe the variation of BER with several variables, such as the SNR, the signal parameters and the size of the spread-spectrum signal set. Both simulations and theoretical analysis indicate that the correlation properties of the spread-spectrum signals have a great effect on the BER. For a given SNR and given signal parameters, the size of the spread-spectrum signal set must be limited to obtain the expected BER. For a given BER, in order to obtain a higher data rate, either the SNR must be increased or the signal parameters must be adjusted.
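A minimal Monte-Carlo sketch of the effect analysed above, under stated assumptions: random finite-length binary codes stand in for the spread-spectrum signal set, the receiver is a bank of correlators, and the channel is AWGN, so the residual cross-correlation between codes acts as inter-code interference that grows with the size of the signal set. The sequence length, set sizes and per-chip SNR definition are illustrative only.

import numpy as np

rng = np.random.default_rng(0)

def simulate_ber(n_chips, m_codes, snr_db, n_trials=20_000):
    """BER of M-ary signalling with random +/-1 spreading codes and a
    bank-of-correlators receiver in AWGN (per-chip SNR = 1/sigma^2)."""
    codes = rng.choice([-1.0, 1.0], size=(m_codes, n_chips))
    sigma = 10 ** (-snr_db / 20)
    bits_per_symbol = int(np.log2(m_codes))
    errors = 0
    for _ in range(n_trials):
        sym = rng.integers(m_codes)
        rx = codes[sym] + sigma * rng.standard_normal(n_chips)
        decided = int(np.argmax(codes @ rx))       # correlate against the whole set
        errors += bin(sym ^ decided).count("1")    # bit errors between symbol labels
    return errors / (n_trials * bits_per_symbol)

for m in (4, 16, 64):
    print(f"{m:3d} codes, 127 chips, -5 dB chip SNR: BER = "
          f"{simulate_ber(n_chips=127, m_codes=m, snr_db=-5):.4f}")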
Department of Mechanical Engineering, Naval University of Engineering, Wuhan, P.R.China
ABSTRACT
When the general boundary element method (BEM) is applied to the Helmholtz integral equation (HIE), singular and hyper-singular integrals occur. A self-adaptive Gauss quadrature algorithm was proposed to overcome the singularity. In this technique, the initial singular boundary element (father element) is divided into temporary refined small elements (children elements), and the integral over the initial element is transformed to Gauss quadrature over the children elements. The children elements can be further divided into smaller elements until the integral converges within an allowable tolerance, without increasing the number of boundary elements, because the refined children elements are discarded as soon as the singular integration is finished. Taking advantage of this technique, the radiating surface can be coarsely meshed so as to reduce the number of elements and the computational effort. The convergence behaviour and the scope of application of this adaptive scheme were then investigated, and it is shown that the scheme can only be applied to singular or weakly singular integration. A numerical case of the sound radiation of a uniformly pulsating sphere was investigated to validate the adaptive algorithm, and the numerical solutions agree well with the analytical solutions, with a relative error of less than 1.5 dB. BEM coupled with FEM was then applied to predict submarine vibration noise, taking fluid-structure interaction effects into account. By visualizing the near-field sound pressure distribution, the high sound pressure areas were localized. Finally, the underwater radiated sound power was calculated and the peak frequencies were identified. Reducing the stiffness of the engine's periodic isolators effectively transfers the sound power at the peak frequencies into the broadband spectrum, so that the line-spectrum vibration noise is controlled.
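The father/children subdivision can be illustrated in one dimension: a fixed low-order Gauss rule is applied to an element, the element is split in two, and the children are subdivided recursively until successive estimates agree within tolerance. The nearly singular 1/r-type kernel below is an illustrative stand-in for the Helmholtz boundary-integral kernels, not the paper's implementation.

import numpy as np

GAUSS_X, GAUSS_W = np.polynomial.legendre.leggauss(4)   # fixed low-order rule per child

def gauss_on_interval(f, a, b):
    """4-point Gauss-Legendre quadrature of f on [a, b]."""
    x = 0.5 * (b - a) * GAUSS_X + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(GAUSS_W * f(x))

def adaptive_gauss(f, a, b, tol=1e-8, depth=0, max_depth=30):
    """Self-adaptive quadrature: the 'father' interval is split into 'children'
    until the unsplit and split estimates agree within tol."""
    whole = gauss_on_interval(f, a, b)
    mid = 0.5 * (a + b)
    split = gauss_on_interval(f, a, mid) + gauss_on_interval(f, mid, b)
    if abs(split - whole) < tol or depth >= max_depth:
        return split
    return (adaptive_gauss(f, a, mid, tol, depth + 1, max_depth)
            + adaptive_gauss(f, mid, b, tol, depth + 1, max_depth))

# Nearly singular kernel: source point a small distance eps off the element.
eps = 1e-3
kernel = lambda x: 1.0 / np.sqrt(x ** 2 + eps ** 2)
print("adaptive quadrature:", adaptive_gauss(kernel, -1.0, 1.0))
print("analytical value:   ", 2.0 * np.arcsinh(1.0 / eps))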
School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiaotong University, Shanghai, P.R.China
ABSTRACT
Flows along both spanwise and streamwise wavy walls display peculiar characteristics that are not observed in flows over a flat plate. In wavy wall flows, the periodic changes of pressure gradient and of streamline curvature generate a turbulence structure different from that of flat plate flows, but the drag and flow noise reduction effects are not the same for spanwise and streamwise wavy walls. Many studies show that a spanwise wavy wall, normal to the streamwise direction, is effective. On the basis of coherent structure theory, it has been proposed that a drag-reducing grooved surface not only controls the spacing between low-speed streaks and thereby reduces the turbulent burst frequency, but also shields part or all of the quiescent fluid in the grooves from the high-speed fluid swept down from the upper layer and the higher shear stress it induces; turbulent drag reduction can thus be achieved. Research on streamwise wavy walls, however, is scarce. In this paper, we discuss which kind of spanwise or streamwise wavy wall, and which wall wave amplitude, is effective in reducing skin-friction drag and flow noise.
In this paper, a numerical simulation of the flow-induced noise of low-Mach-number turbulent flow over a sinusoidal wavy wall is presented, based on the unsteady incompressible Navier-Stokes equations and Lighthill's acoustic analogy. Large Eddy Simulation (LES) was used to compute the space-time flow field, with the Smagorinsky sub-grid scale (SGS) model as the turbulence model. Using Lighthill's acoustic analogy, the LES flow field was taken as the near-field sound source, and the sound radiated by the turbulent flow was computed with Curle's integral formulation under the low-Mach-number approximation. The relationship between flow noise and drag on the wavy wall is studied, and the spanwise and streamwise wavy wall configurations and wall wave amplitudes that are effective in reducing drag and flow noise are discussed.
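For reference, the hybrid step described above rests on Curle's solution of Lighthill's equation, which in its standard form (sign conventions for the surface normal vary between references) reads

\[
p'(\mathbf{x},t) \;=\; \frac{1}{4\pi}\,\frac{\partial^{2}}{\partial x_i\,\partial x_j}\int_{V}\frac{[\,T_{ij}\,]}{r}\,\mathrm{d}V(\mathbf{y})
\;-\;\frac{1}{4\pi}\,\frac{\partial}{\partial x_i}\oint_{S}\frac{[\,P_{ij}\,n_j\,]}{r}\,\mathrm{d}S(\mathbf{y}),
\qquad r=|\mathbf{x}-\mathbf{y}|,
\]

where \(T_{ij}=\rho u_i u_j+(p'-c_0^{2}\rho')\,\delta_{ij}-\tau_{ij}\) is the Lighthill stress tensor, \(P_{ij}=p\,\delta_{ij}-\tau_{ij}\) is the compressive stress on the wall, and the square brackets denote evaluation at the retarded time \(t-r/c_0\); at low Mach number the surface (dipole) term typically dominates the wall-generated noise.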
(1) Institute of Sound and Vibration Research, University of Southampton, UK (2) National Physical Laboratory, Teddington, Middlesex, UK
ABSTRACT
The full far-field characteristics of an underwater ultrasonic transducer can be predicted from either a 2-D planar scan of the complex pressure in the near-field of the source or a transducer surface velocity scan. A Laser Doppler Vibrometer (LDV) can provide such a scan of the radiating surface and hence has the potential to be a fast, non-invasive method for source characterisation and, in turn, field prediction. Such measurements are, however, significantly complicated by the acousto-optic interaction – that is, the effect on the measurements of the acoustic field through which the laser beam passes. Initial examples of surface velocity measurements and field predictions are presented to show the possibilities of the approach. The results of a theoretical study of the effect of the acousto-optic artefact on LDV measurements for a circular, plane-piston transducer are also presented. The use of a transient pressure field is important both for simulation and experiment, such that measurements are made over a time window which ends before any acoustic signal reaches the water tank boundaries. The simulation results show a significant acousto-optic artefact in the surface velocity data, but also that, in spite of this, useful field predictions may be made for some applications.
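A minimal sketch of field prediction from a planar surface-velocity scan, using the angular-spectrum method: the scanned normal velocity is Fourier-transformed, each plane-wave component is converted to pressure and propagated, and the result is transformed back. The piston source, frequency, grid and projection distance are illustrative assumptions, and the acousto-optic artefact is not modelled here.

import numpy as np

f, c, rho = 500e3, 1480.0, 1000.0      # assumed drive frequency and water properties
k = 2 * np.pi * f / c
N, dx = 256, 0.5e-3                    # scan grid: N x N points at dx spacing [m]

# Synthetic "surface velocity scan": uniform circular piston of 5 mm radius, 1 m/s
x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x)
v0 = np.where(X ** 2 + Y ** 2 <= 0.005 ** 2, 1.0, 0.0).astype(complex)

# Angular-spectrum projection of normal velocity to pressure at z = 50 mm
V = np.fft.fft2(v0)
kx = 2 * np.pi * np.fft.fftfreq(N, dx)
KX, KY = np.meshgrid(kx, kx)
kz = np.sqrt(k ** 2 - KX ** 2 - KY ** 2 + 0j)          # evanescent parts become imaginary
z = 0.05
P = rho * c * (k / np.where(kz == 0, 1e-12, kz)) * V * np.exp(1j * kz * z)
p_z = np.fft.ifft2(P)
print("on-axis |p| at z = 50 mm:", abs(p_z[N // 2, N // 2]), "Pa")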
QinetiQ North America, VA, USA
ABSTRACT
The focus of military activity has recently shifted from large area engagements to regional conflicts. Consequently, supportive Naval maritime operations have continued to evolve toward littoral warfare in complicated shallow-water, near-shore environments. This evolution requires new sensors, advanced Concept of Operations (CONOPS), and improved data-analysis capabilities, among others. Planning operations in these harsh-environment areas is difficult because accurate predictions of tactical sensor performance depend on detailed knowledge of the local environmental conditions. Tactical mission planning is thus seldom optimal or efficient - often resulting in coverage gaps, increased risk, and reduced mission success. The U.S. Navy has recently been exploring extended-life environmental sonobuoy concepts to better characterize the littoral environment. Most designs contain a thermistor string to measure ocean temperatures and also hydrophones to measure ambient noise. This type of complex sonobuoy would be far more expensive than a traditional single-measurement AXBT but it could provide a more thorough littoral environment assessment. This paper examines the trade-off between increased sensor cost and improved ASW performance, in terms of area coverage and probability of detection.
Six advantages of an extended-life combined thermistor string/hydrophone approach, compared to AXBTs and tactical hydrophones, are: 1) higher accuracy of the raw data; 2) temporal averaging to smooth out fluctuations; 3) extended area coverage during drift; 4) less chance for surface temperature anomalies (e.g., mixed-layer-depth errors) caused by various electronic and mechanical variability upon impact; 5) opportunities to discover thermal and acoustic feature boundaries during drift; and 6) less need to re-seed thus allowing longer tactical mission times. These advantages are evaluated relative to the following disadvantages: 1) increased cost; 2) potential drift outside the mission area; and 3) need for increased battery life for longer durations. The analysis is tempered by considering how a potential new system might be used. For this trade-off analysis, temperature data from the Sea of Japan were used to initialize a dynamic ocean model. A realistic dynamic ambient noise field was then created from archival data and a noise statistical model was used to add variability caused by passing ships. Then optimal initial positions for several notional buoys were determined followed by a simulation of drifting positions and collected data over several days. The analysis shows that a drifting extended-life environmental sonobuoy can provide significant improvement in environmental characterization, tactical planning, and ASW detection performance.
Faculty of Applied and Computer Sciences, Vaal University of Technology, Vanderbijlpark, Republic of South Africa
ABSTRACT
In this paper, the problem of a longitudinal wave travelling through an elastic plate in water is addressed. The problem is reduced to an experimental and theoretical investigation of the reflected and transmitted waves. To solve the problem, a carbon steel test sample of dimensions 202 mm x 100 mm x 10 mm immersed in water was considered. An incident pulse was generated by a transducer on one side of the plate and received by an identical transducer on the other side. The generated pulse was also simulated using a 1D model in the time domain. Simulated and experimental ultrasonic traces have been used to analyse the signals from the plate.
Massachusetts Institute of Technology, Cambridge MA, USA
ABSTRACT
An ultrasonic thruster (UST) is defined as a piezoelectric actuator excited at an ultrasonic frequency that generates high-power acoustic waves so as to produce bulk fluid movement for underwater propulsion. The thrust force is associated with the decrease in mean momentum flux that results from acoustic energy loss in the medium, and can be intensified through finite-amplitude ultrasound. Most piezoelectric transducers do not convert all of their electrical power into acoustic energy; rather, spurious heat is dissipated through the transducer surface. Indeed, heating from intense ultrasound - on the order of 100 W cm-2 - is observable even to the naked eye as the water next to the transducer heats up. From our past research we found that the UST has an inherently low acoustic efficiency and, in this study, we are interested in understanding how heat loss through the transducer surface affects its efficiency and the corresponding thrust force. The water column within the insonified beamwidth is also investigated to gain new insights into far-field heat transfer.
An experimental setup was constructed to systematically measure, using thermocouples, the rate of temperature rise on a transmitting UST and at selected points along the transducer axis in the far field. We investigate the sum of the heat energy generated on the transducer surface and in the insonified water, and compare it with the difference between the electrical and acoustic energy. The experimental results allow us to determine whether heating in the insonified far field is a result of the acoustic waves impinging on the thermocouple (a false measurement) or an actual overall increase in the insonified water temperature, as has been thought to be the case. Knowledge of the exact phenomenon will help engineers to employ the right measures to minimize spurious heat loss, or even to exploit it through specialized nozzle appendages so as to enhance the UST efficiency.
Defence Science and Technology Organisation (DSTO), Australia
ABSTRACT
This paper describes a computer model, HANKEL, that was written by the author to explore the physics of acoustic propagation in a horizontally-stratified ocean-acoustic environment, a useful first approximation for shallow-water regions. Like other wavenumber-integral models, HANKEL computes the complex pressure field and transmission loss due to a point source at one or more field points. In addition, though, HANKEL has a 'debug' mode that enables the user to create a PDF document that illustrates the integrands involved in the calculation of the field at a given receiver point. This 'auto-documentation' feature makes HANKEL useful for the student and experienced acoustician alike, providing visual representations of the underlying mathematics. We illustrate this pedagogical use of HANKEL through examples. In particular we draw out the analogy between the classical rays of geometrical acoustics and the so-called generalized rays that are explicitly evaluated by HANKEL. No shortcuts are taken by HANKEL in computing the exact solution to the underlying wave equation, apart from the practical necessity of truncating infinite series of generalized-ray definite integrals and of obtaining approximate values for each of those integrals via numerical integration.
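As a small, self-contained illustration of the wavenumber-integration idea that HANKEL applies to layered media (this sketch is not the HANKEL code), the free-space point-source field can be written as a Bessel-function integral via the Sommerfeld identity, truncated and evaluated numerically, and compared with the exact spherical wave; the loss factor, truncation and sampling stand in for the practical approximations mentioned above.

import numpy as np
from scipy.special import j0

# exp(ikR)/R = i * Int_0^inf (kr/kz) * J0(kr*r) * exp(i*kz*|z|) dkr,  kz = sqrt(k^2 - kr^2)
f, c = 100.0, 1500.0
k = 2 * np.pi * f / c * (1 + 1e-3j)           # small loss regularises the branch point
r, z = 300.0, 50.0                            # field-point range and depth offset [m]

kr = np.linspace(1e-6, 4 * k.real, 200_000)   # truncated, densely sampled wavenumber axis
kz = np.sqrt(k ** 2 - kr ** 2)
kz = np.where(kz.imag < 0, -kz, kz)           # enforce Im(kz) >= 0 (radiation condition)
integrand = (kr / kz) * j0(kr * r) * np.exp(1j * kz * abs(z))
p_wi = 1j * np.sum(integrand) * (kr[1] - kr[0])

R = np.hypot(r, z)
print("wavenumber integral :", p_wi)
print("exact exp(ikR)/R    :", np.exp(1j * k * R) / R)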
(1) School of Mechanical Engineering, The University of Adelaide, Australia (2) Scripps Institution of Oceanography, University of California, San Diego, USA
ABSTRACT
Approximation of acoustic Green's functions through cross-correlation of acoustic signals in the ocean is a relatively young field that has become an area of interest over the past few years. Although the amplitudes of these estimates generally differ from those of the true Green's Function, the estimated arrival structure can be highly accurate. Inter-hydrophone travel times extracted from these Green's Function estimates can therefore be relied upon for practical applications.
Acoustic data were collected for two weeks during late 2006 on an L-shaped array (combination of vertical hydrophone line array and horizontal bottom-mounted hydrophone line array) that was deployed in shallow water (~70-75m depth) on the New Jersey Shelf. The data were cross-correlated and acoustic Green's Functions were subsequently estimated. This paper describes how the inter-hydrophone travel times extracted from these Green's Function estimates were used to self-localise the array.
Successful implementation of this cross-correlation derived travel-time application provided the array geometry information necessary for both ambient and active source acoustic data recorded on the array to be useful for other analyses.
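A toy version of the travel-time extraction, under strong simplifying assumptions (white far-field noise sources on a distant ring, two receivers, no multipath): long noise records on two hydrophones are cross-correlated and, with sufficient averaging, the strongest correlation peaks appear near the direct inter-hydrophone travel time d/c. The geometry and source distribution are illustrative, not the New Jersey Shelf configuration.

import numpy as np
from scipy.signal import correlate

rng = np.random.default_rng(1)
fs, c, d, T = 8_000, 1500.0, 15.0, 60.0       # sample rate, sound speed, separation, duration
n = int(T * fs)

h1 = np.zeros(n)
h2 = np.zeros(n)
for theta in rng.uniform(0, 2 * np.pi, 200):   # independent far-field noise sources
    src = rng.standard_normal(n)
    lag = int(round(d * np.cos(theta) / c * fs))   # only the differential delay matters
    h1 += src
    h2 += np.roll(src, lag)

xc = correlate(h2, h1, mode="full", method="fft")
lags = (np.arange(len(xc)) - (n - 1)) / fs
window = np.abs(lags) < 2 * d / c
peak_lag = lags[window][np.argmax(np.abs(xc[window]))]
print(f"direct travel time d/c = {d / c * 1e3:.2f} ms, "
      f"correlation peak at |lag| = {abs(peak_lag) * 1e3:.2f} ms")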
Institute of Acoustics, Chinese Academy of Sciences, Beijing, P.R.China
ABSTRACT
An acoustic propagation experiment was conducted in the Yellow Sea in the summer of 2009, in which a suite of acoustic and oceanographic sensors was deployed to collect high-quality environmental and acoustic data. One goal was to investigate the effects of ocean variability on low-frequency sound propagation. During the experiment, nonlinear internal waves were not observed, but the internal tide and high-frequency internal waves were very strong. This study continues an investigation of monochromatic signals from the two propagation tracks, to the south and to the east; this paper gives results for one track only. On this track, internal waves led to coupling of the acoustic modes. As the water depth at the experiment site is only 36.5 m and the acoustic signal frequency is 260 Hz, only one or a few acoustic modes were observed when the distance between the source and the vertical receiving array was about 11 km, so mode coupling is easy to identify. During the acoustic experiment the received sound field often contained only one mode, and alternation between mode 2 and mode 1 was observed. This caused strong fluctuations of the acoustic field energy at almost every receiving depth; the peak-to-peak value of the energy fluctuations exceeded 20 dB. The fluctuations at the upper and lower receiving depths were opposite in phase, with a phase difference of about 180 degrees. The transition point depth was about 24 m; this depth lies at the zero of mode 2 and close to the depth of the maximum of mode 1.
(1) Defence Science and Technology Organisation, Edinburgh, SA, Australia (2) Centre for Marine Science & Technology, Curtin University of Technology, Perth WA, Australia (3) Thales Australia, Rydalmere, NSW, Australia
ABSTRACT
Modelling the reflection of acoustic signals at a realistic ocean surface, particularly at small angles of incidence, is an area of underwater acoustics for which no known solution exists. For mid-frequencies and above (over about 1 kHz) there are a number of complex phenomena, each of which adds considerable complexity. These include: the two-dimensional sea surface shape formed from local wind and distant swell; acoustic shadowing of parts of the surface to sound incident at small angles; diffraction of sound into the shadow zones; and bubble formation from white-caps. Models used to describe sound transmission to ranges of tens of kilometres must incorporate practical sub-models of surface loss to describe the reduction in received signal due to scattering at non-specular angles. The literature of the last several decades includes many descriptions of mathematically based studies of surface loss phenomena; however, very little of this work has resulted in models for routine use. This paper reviews the situation and shows comparisons of surface loss values obtained from practical sub-models employed by the authors with, firstly, the small-slope model made available by the University of Washington and, secondly, surface loss values inferred from use of a transmission model which includes a deterministic description of the rough sea surface. In this work, particular attention has been paid to the degree of modelling complexity required to capture the loss phenomena evidenced by the deterministic modelling. In this extension of an earlier study by the authors, an attempt is made to include the effects of bubbles appropriate to the sea state, via adjustment of the sound speed in the bubbly region.
National Laboratory of Acoustics, Institute of Acoustics, Chinese Academy of Sciences, Beijing, P.R.China
ABSTRACT
Horizontal correlation is one of the most important parameters in ocean acoustics and has received much attention over many years. However, both theories of and experiments on horizontal correlation are still open topics. A theory based on both multipath interference and ocean fluctuation is proposed to calculate the horizontal correlation. The theoretical analysis indicates that both the longitudinal and the transverse correlation at low frequency may oscillate with frequency. The bottom parameters have a strong effect on the frequency dependence of the longitudinal correlation, and the properties of the ocean fluctuation have a strong effect on the frequency dependence of the transverse correlation, which can be used to estimate ocean fluctuations. A shallow-water acoustic experiment was performed in 2007 to test the theoretical predictions. In the experiment, a 500 m long horizontal line array was used to measure sound signals up to 2500 Hz emitted from different directions. The measured frequency dependences of the horizontal correlation agree well with the theoretical predictions, and can provide information on the bottom properties and the ocean fluctuation.
(1) Tokyo University of Marine Science and Technology, Japan (2) National Research Institute of Fisheries Engineering, Japan (3) Fusion Incorporation, Japan
ABSTRACT
A highly efficient ultrasonic biotelemetry system is desirable for ocean regions with high underwater ambient noise, especially from the temperate zone to the tropics. Four parameters are defined as the design factors for this system: long transmission distance, battery life, pinger size and recognition capability. These four parameters of the pinger are analysed and investigated in order to design an optimum ultrasonic biotelemetry system. For the first parameter, long distance, the transmitting frequency must be considered. For the second, battery life, an efficient transducer and a low-power-dissipation circuit must be designed. For the third, smallest pinger size, microelectronic components must be adopted. The last, high recognition, depends on the signal processing method of the underwater transmission system.
As a result, the first requirement is met with a transmission range of over 1,000 m; the second is realized with a battery life of 240 days at a 30-second repetition interval using the small SR626SW 32 mAh battery; the third is met with a pinger designed to be φ10 mm in diameter and 40 mm long; and for the last, an M-sequence signal is used in the pinger and correlation processing is adopted in the receiving system for high recognition against noise and to avoid collisions between pingers.
The system consists of the tiny pinger and high-performance receiving equipment including the transducer. The pinger can transmit its ID and depth information in each repetition interval. The receiver correlates the received M-sequence signal from the pinger using an FPGA chip and calculates the direction of the pinger. The raw data can be stored on a PC after analog-to-digital conversion with 16-bit, 192 kHz sampling.
Experimental data obtained in Tokyo Bay using the developed system will be presented. This research program is supported by the Japan Science and Technology Agency.
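A minimal sketch of the M-sequence correlation processing described above: a maximal-length binary code is generated with a linear feedback shift register and detected in noise by a matched filter (correlator). The register length, taps, code amplitude and noise level are illustrative assumptions, not the developed pinger's actual parameters.

import numpy as np

def m_sequence(taps, nbits):
    """+/-1 sequence from a Fibonacci LFSR with the given feedback taps
    (taps=(7, 6) is a maximal-length configuration with a 127-chip period)."""
    state = [1] * max(taps)
    seq = []
    for _ in range(2 ** max(taps) - 1):
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        seq.append(2 * state[-1] - 1)          # map {0,1} -> {-1,+1}
        state = [fb] + state[:-1]
    return np.array(seq[:nbits], dtype=float)

code = m_sequence((7, 6), 127)                 # 127-chip pinger ID code

rng = np.random.default_rng(2)
rx = rng.standard_normal(4000)                 # background noise
delay = 1234
rx[delay:delay + len(code)] += 0.5 * code      # weak pinger transmission at unknown delay

mf = np.correlate(rx, code, mode="valid")      # matched-filter (correlation) receiver
print("true delay:", delay, " detected delay:", int(np.argmax(np.abs(mf))))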
(1) Doshisha University, Kyoto, Japan (2) Hamamatsu Medical University, Sizuoka, Japan
ABSTRACT
Osteoporosis is a disease of the skeleton characterized by bone tissue deterioration, leading to an increase in bone fragility. Cortical bone is the stiff part of the osseous tissue that forms bones. Its main functions are to support the body and to protect the organs, and it contributes about 80% of the weight of the human skeleton. Cortical bone is strongly affected by osteoporosis. The current gold standard for bone assessment is the evaluation of the bone mineral density (BMD) obtained by X-ray absorptiometry (DXA). However, the harm of X-ray radiation and the difficulty of large-scale osteoporosis screening with this complex and expensive machine have motivated investigations using non-invasive quantitative ultrasound techniques. The interaction between ultrasound and cortical bone is not well understood, owing to its complex microstructure, and can depend on the position in the bone and its orientation. Another phenomenon of cortical bone is also not well understood: its piezoelectric characteristics.
The aim of this study is to improve the understanding of the basic characteristics of cortical bone and of its deterioration due to osteoporosis. In a first step, the relationship between the BMD, obtained by X-rays, and the acoustic attenuation and speed in bone, obtained by quantitative ultrasound techniques, is investigated. In a second step, the piezoelectric characteristics of bone are investigated by imposing a mechanical stress on bone samples and recording the electric response. Eight 6 mm cubic cortical bone samples were cut from a 27-month-old bovine left femur, at the anterior and posterior parts and at different distances from the middle of the femur. The interaction between ultrasound and cortical bone is investigated in the following experiment: a tube of 6 mm square cross-section is filled with normal saline solution (NSS), and acoustic transducers are placed at its extremities. By placing the bone sample in the middle of the tube filled with NSS, it is possible to observe the wave transmitted through the bone sample and to obtain the wave speed and attenuation in bone. A moderate correlation was found between BMD and the ultrasonic speed and attenuation. The piezoelectric characteristics of the bone samples were obtained by placing electrodes on opposite sides of the bone and recording the electric response to an impulsive mechanical stress. These results confirmed the relation between bone piezoelectricity and BMD. Together, these experiments confirm the link between BMD, bone piezoelectricity and the ultrasonic values.
Department of Frontier Electronics and Information, Faculty of Engineering, Kanagawa University, Japan
ABSTRACT
Recently, ocean climate research using Autonomous Underwater Vehicles (AUVs) has been planned in the Antarctic Ocean. In this paper, in order to characterize sound propagation in Lützow-Holm Bay in the Antarctic Ocean for the acoustic communication of an AUV, we calculate pulse waveforms using the parabolic equation method with an inverse Fast Fourier Transform algorithm. The sound velocity profile of Lützow-Holm Bay was obtained from the Japanese Antarctic Research Expedition (JARE-31) carried out from 1991 to 1992. We have investigated the influence of the bathymetry on the pulse amplitude along the transverse lines OW and L of the bay. The transmitter was placed at depths of 50, 100, 200 and 300 m, and the propagation range was held constant at 39 km and 36 km in the OW and L transverse lines, respectively. We calculated the propagating pulse with three bottom models in order to clearly isolate the influence of the bathymetry on the pulse amplitude. By confirming how the pulse amplitude fluctuates with the depths of the source and receivers, we estimate the pulse amplitude with the parabolic equation method as the source and receiver depths are changed. As a result, the variation of the pulse amplitude is about -3 dB when the source depth is 100 m in the OW transverse line, and about -4 dB when the source depth is 100 m in the L transverse line. The pulse amplitude varies from 5 dB to -10 dB when the source depth changes from 50 m to 300 m.
(1) Navoi State Mining Institute, Uzbekistan (2) Tashkent University of Information Technology (Samarkand branch), Uzbekistan
ABSTRACT
The anisotropy of the attenuation of high-frequency acoustic waves in LiNbO3 and LiTaO3 (point group symmetry 3m) and in Bi12SiO20 and Bi12GeO20 (point group symmetry 23) crystals has been investigated on the basis of experimental data on the attenuation of acoustic waves propagating along the main crystallographic directions in these crystals. The measurements were carried out using Bragg light diffraction on longitudinal and transverse acoustic waves at room temperature in the frequency range from 0.4 to 1.8 GHz.
According to the known perturbation theory, the attenuation coefficient can be defined in terms of the effective viscosity. Since the viscosity tensor has the same symmetry as the elastic stiffness tensor, three independent constants must be determined for crystal class 23 and six for crystal class 3m, to which the investigated crystals belong. The crystal cuts chosen for examination ([100], [001], [110] and [111]) gave a total of 7 different modes. The values of the viscosity components were determined by substituting the effective viscosity values obtained from the measured attenuation data into the mode viscosity equations. Using the calculated components of the viscosity tensor, the propagation velocities and the mode displacements, it is possible to determine the attenuation coefficient of an acoustic mode along an arbitrary direction in the crystal. At the same time, the contribution of the dielectric loss to the total attenuation coefficient was assessed for piezoactive waves in these crystals. It is shown that the dielectric loss can have a significant influence on the magnitude and anisotropy of the damping factor for piezoactive longitudinal and transverse waves in the (100) plane in LiNbO3 and LiTaO3 crystals and in the (110) plane in Bi12SiO20 and Bi12GeO20 crystals. The obtained viscosity components were used to calculate the anisotropy of the attenuation of the three wave modes propagating along any selected direction in the (110) and (001) crystallographic planes. The results can be used for the analysis and optimization of the parameters of acoustic delay lines and acousto-optical modulators.
College of Science and Technology, Nihon University, Tokyo, Japan
ABSTRACT
Currently, laser, water jet and wire electric discharge machining are used for hole machining of brittle materials such as ceramics. The advantages of these methods are a high removal rate and high machining accuracy. However, their disadvantage is that the conventional equipment is large and its structure is complex. To solve these issues, a method using ultrasonic vibration and abrasive is proposed. We are developing a new method for hole machining using the ultrasonic vibration of a hollow-type stepped horn, and we expect that the equipment can be simplified and miniaturized.
The longitudinal vibration characteristics of the ultrasonic vibration source with a hollow-type stepped horn for hole machining were studied in previous research. It was found, first, that the first resonance frequency of the vibration source takes the same value when the hollow part is a quarter wavelength deep, for all cross-sectional ratios, and that the amplification factor in this case is determined by the cross-sectional ratio; the hollow-type stepped horn with a quarter-wavelength hollow part therefore has the best shape for all cross-sectional ratios. Second, the amplification factor is proportional to the cross-sectional ratio, but once the amplification factor exceeds 4.6 it is no longer proportional to the cross-sectional ratio. Third, under a static pressing force the longitudinal vibration has a half-wavelength resonance over the hollow-type stepped horn length in all cases. In this study, ultrasonic vibration sources consisting of a hollow-type stepped horn with a vibration converter of diagonal slits are used. The longitudinal and torsional vibration characteristics of the horn are clarified and the shape of the horn is examined. The relationship between the cross-sectional ratio and the longitudinal and torsional vibration amplitudes is discussed for horns with the vibration converter of diagonal slits in the solid part. As a result, the longitudinal vibration amplitude is proportional to the cross-sectional ratio, but the torsional vibration amplitude takes the same value for all cross-sectional ratios. The relationship between the position of the diagonal slits and the torsional vibration amplitude is also discussed; the torsional vibration amplitude at the tip becomes large when the diagonal slits are located in the hollow part.
Physics Department, University of Allahabad, Allahabad, India
ABSTRACT
Ultrasonic attenuation in the cemented carbides NbC, TaC, ZrC and TiC along the <110> direction, arising from phonon-phonon interaction and thermoelastic relaxation, is studied at elevated temperatures. The second- and third-order elastic constants (SOECs and TOECs) are calculated for the determination of the attenuation. It is found that the attenuation increases with temperature along the <110> direction. The thermoelastic loss is found to be very small in comparison with the phonon-viscosity loss. It is also found that the general trend of the temperature variation of the attenuation is the same as in pure metals and NaCl-type crystals. These results may be used for the characterization of the material during processing.
Faculty of Electrical Engineering, Czech Technical University In Prague, Czech Republic
ABSTRACT
The physical characteristics of liquids and their accuracies were determined using an ultrasound resonator. Initially, the influence of different experimental settings (different liquids, different cells, the kind and area of the transducers used, their position and the different ways of bonding them to the cell walls) on the obtained frequency spectra was studied. On the basis of these experiments, an optimal experimental setting was proposed. The characterization of the physical properties of the fluid was obtained by swept-frequency acoustic interferometry. The positions of the local frequency maxima and their widths were determined and analysed, and the sound speed, attenuation and liquid density were calculated together with their accuracies. Additionally, the prepared experimental set-up was used for students' laboratory education.
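For the plane-wave resonator interpretation of the swept-frequency spectra, the sound speed follows from the spacing of successive resonance peaks, c = 2 L Δf. A minimal worked example with an assumed cell length and hypothetical peak positions:

import numpy as np

L = 10e-3                                                   # acoustic path length of the cell [m]
peaks_hz = np.array([1.112e6, 1.186e6, 1.260e6, 1.334e6])   # hypothetical resonance peaks
delta_f = np.mean(np.diff(peaks_hz))
print("sound speed c = 2*L*df =", 2 * L * delta_f, "m/s")   # ~1480 m/s for these example numbers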
Department of Mechanical Engineering, University of Bristol, Bristol, UK
ABSTRACT
In this paper, a creep-damaged material is modelled as a two-phase composite material comprising a matrix and a distribution of clustered spherical voids. The voids are dispersed uniformly within oblate ellipsoidal regions that represent preferred regions of voiding close to grain boundaries. In turn, the ellipsoidal regions are distributed randomly in the matrix. A double composite model based on coherent elastic wave propagation is used to determine the effective stiffness and the overall density of the two-phase material. As the creep progresses, the ellipsoid elements are sparsely scattered in the matrix, but they continue to grow in volume, containing more and more voids within them. This evolution results in an anisotropy increase due to the preferential void formation within the ellipsoid elements. Velocity estimates can be used to predict the elastic softening and the development of anisotropy, providing bulk-average information pertinent to the assessment of creep damage.
(1) Department of Physics, Pukyong National University, Busan, Korea (2) Department of Multimedia Engineering, Tongmyong University, Korea (3) Department of Electronic Engineering, Dongseo University, Korea
ABSTRACT
The transient ultrasonic fields and the B-mode images of a linear-array medical transducer with a few defective elements were obtained by simulation, and the results were compared with those of a normal transducer. The centre frequency of the transducer was 7.5 MHz and the acoustic beam was formed by 64 active elements, including the defective ones, out of a total of 192. It was shown that the fields of the transducer with defective elements spread widely in the lateral direction owing to the enhanced sidelobe level. Spurious images appeared beside that of a point target, and the lateral spatial resolution degraded significantly as the number of defective elements increased.
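The sidelobe enhancement caused by dead elements can be illustrated with a simple far-field array-factor calculation; the element pitch, apodisation and dead-element positions below are assumptions for illustration, not the simulated transducer's exact parameters.

import numpy as np

f, c = 7.5e6, 1540.0
lam = c / f
n_elem, pitch = 64, lam / 2
x = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch

theta = np.radians(np.linspace(-30, 30, 2001))
steer = np.exp(1j * 2 * np.pi / lam * np.outer(np.sin(theta), x))

def pattern_db(weights):
    b = np.abs(steer @ weights)
    return 20 * np.log10(b / b.max())

w_normal = np.ones(n_elem)
w_defect = w_normal.copy()
w_defect[[10, 11, 30, 47]] = 0.0                      # four dead elements (arbitrary positions)

for name, w in (("all elements live", w_normal), ("4 dead elements", w_defect)):
    p = pattern_db(w)
    side = p[np.abs(np.degrees(theta)) > 2.0].max()   # worst sidelobe outside the main lobe
    print(f"{name:18s} peak sidelobe: {side:6.1f} dB")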
George W. Woodruff School of Mechanical Engineering, UMI Georgia Tech, Metz-Technopole, France
ABSTRACT
The technique of back-propagating plane waves to reconstruct the acoustic field before it reaches the spatial area where it is measured is used to image the field in the region containing the corrugated interface between a solid and a liquid. Numerical experiments are performed using the finite element technique to establish realistic situations. It is verified to what extent this method enables a correct reconstruction of the interface as a function of corrugation width and height, taking into account the possibility of internal reflections. The results are important for determining in what regime the plane wave expansion technique, first developed by Lord Rayleigh and nowadays often used for phononic crystals, in diffraction simulations, etc., is applicable.
(1) Ioffe Physical-Technical Institute, St.Petersburg, Russia (2) Institute of Physics, Warszawa, Poland
ABSTRACT
We have already reported that the acoustoelectric (AE) effect (the drag of charge carriers in a conducting medium by a travelling acoustic wave) produced by a surface acoustic wave (SAW) in magnetoresistive manganite films contains, along with the ordinary odd component, an anomalous one which is even in the SAW wave vector. The anomalous effect dominates near the metal-insulator transition in manganites, while the ordinary effect prevails at high and low temperatures. The anomalous AE effect appears to be caused by strong modulation of the film conductivity produced by the elastic deformations carried by the SAW. In the present contribution we report on investigations of the influence of high magnetic fields H up to 14 T on the acoustoelectric effect in La_{0.67}Ca_{0.33}MnO_{3} films grown by the laser ablation technique on piezoelectric LiNbO_{3} substrates. The SAW was launched along the surface of the LiNbO_{3} substrate and detected by two interdigital transducers. The AE effect was measured at the SAW frequency of 90 MHz by the lock-in technique in the temperature range 4.2-300 K using a superconducting solenoid.
Our studies have shown that the total AE voltage changes substantially in a magnetic field applied parallel to the SAW wave vector. These changes are most noticeable in the behaviour of the anomalous even component of the AE effect, while the odd AE voltage shows a smooth monotonic drop with magnetic field. The anomalous AE voltage initially grows in low fields (up to 1 T); a further increase of H results in a sharp decrease and the disappearance of the anomalous AE effect in a field of about 8 T. The field dependence of the odd AE component can be explained within the framework of the classical theory of the acoustoelectronic interaction, by the opposing effects of the increased film conductivity in H (magnetoresistance) and the proportionally reduced SAW attenuation in the film. The behaviour of the anomalous even component of the AE effect in low fields remains unclear and requires further investigation. With regard to the behaviour of the even AE component in high magnetic fields, we speculate that its sharp decrease is determined by magnetic-field-induced changes in the piezoresistance (the change of conductivity under deformation) of the manganites.
(1) Dept. of Physics, Urumu Dhanalakshmi College, Tiruchirappalli, India (2) Dept. of Physics, Seethalakshmi Ramaswamy College, Tiruchirappalli, India
ABSTRACT
Acoustic signals couple to liquids via fundamental parameters such as density and viscosity. Variations in the relevant liquid parameters may result in variations of thermodynamic parameters such as internal pressure and free volume. The variation in the sample properties is investigated to understand the structural influence of the solute on the solvent. In the present study, measurements of ultrasound velocity, density and viscosity over the concentration range 0.001 m to 0.15 m of non-aqueous formamide solutions of calcium salts of some organic acids, in the temperature range 5°C to 55°C, have been made to compute thermodynamic parameters that reveal the nature of the interactions between the components of the solutions. Further, the compressibility behaviour of a solution throws light on the solute-solvent interactions. The solvation number has been calculated to determine the interactions taking place in the solutions. The results obtained from the compressibility method agree well with results from different theoretical and experimental methods.
Institute of Telecommunications, Teleinformatics and Acoustics, Wroclaw University of Technology, Wroclaw, Poland
ABSTRACT
Several possibilities exist for the simultaneous delivery of laser radiation and low-frequency, high-intensity ultrasound: introducing ultrasonic oscillations into the optical fibre by rigidly fixing the fibre to a vibrating element, and non-contact influence of the ultrasonic wave on the laser beam. The paper presents the results of experimental studies of the transmission of an ultrasonic wave in optical fibres using a sandwich-type ultrasonic transducer. It also presents the amplitude characteristics of an ultrasonic signal propagated in an optical fibre. The effect of the fibre length on the achieved output signal amplitude was studied, and the relation between the output signal of a capacitive sensor and the power applied to the sandwich transducer is presented. The power reflected during ultrasonic wave propagation in an optical fibre was also measured. The measurements of the ultrasonic wave transmission were performed for single-mode and multi-mode step-index optical fibres.
The article also presents the results of Matlab simulations and experimental studies of the non-contact influence of the ultrasonic wave on the laser beam. The role of the air gap and its influence on laser-ultrasonic transmission in the optical fibre was examined. Two optical fibres were used with an air gap between them. One fibre was attached to the laser diode and passes through the hole in the sandwich transducer and in the velocity transformer; the other fibre was attached at the end of the transformer, after a small air gap. The ultrasound changes the length of the air gap, and the amplitude of the modulated signal decreases as the distance between the optical fibres increases. Moreover, the acoustic wave changes the refractive index of the light path, so that an equivalent of a Bragg grating occurs. The changes in the length of the air gap and the occurrence of the Bragg-grating equivalent enable the phase and amplitude modulation of the laser radiation. The advantages and disadvantages of both solutions mentioned above are discussed.
Precision and Intelligence Laboratory, Tokyo Institute of Technology, Japan
ABSTRACT
Silica nanofoam is a porous material with a nanometer-scale structure produced through a sol-gel process, and is used as a heat insulator. Because of its extremely low acoustic impedance, the nanofoam is expected to work well as an acoustic matching layer of an airborne ultrasonic transducer for highly sensitive and wideband ultrasound transmission and detection. The nanofoam may also have potential as an acousto-optic device because of its very low sound speed and optical transparency.
In this study, we experimentally estimated the fundamental acoustic and acousto-optic characteristics of the nanofoam as functions of density through acousto-optic measurements. A piezoelectric transducer was attached to a silica nanofoam sample of 10 x 10 x 5 mm3 and radiated a longitudinal sound wave into the sample. A He-Ne laser beam at a wavelength of 632.8 nm was transmitted through the sample in the direction perpendicular to the propagation of the ultrasound, and diffraction of the light by the ultrasonic wave was observed. Raman-Nath diffraction occurs at relatively low frequencies because the sound speed is low, and the experiments were carried out at 510 kHz. The diffraction pattern agreed well with Raman-Nath diffraction theory, and the sound speed was estimated from the diffraction angle. The sound speed varied from 55 to 178 m/s for sample densities of 100 to 300 kg/m3. The measured sound speed agreed closely with the sound speed calculated from the averaged density and the bulk Young's modulus. The intensity ratio of the first-order diffracted light to the fundamental light was 1 to 4 when the input ultrasonic intensity was 9 W/m2 for a 200 kg/m3 sample. This shows that the nanofoam has a higher acousto-optic efficiency than other conventional materials.
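For reference, the sound-speed estimate follows from the Raman-Nath grating relation sin(theta_m) = m * lambda_light / Lambda_sound, so v = f * Lambda_sound = f * lambda_light / sin(theta_1). A short worked example with an assumed (hypothetical) first-order diffraction angle:

import numpy as np

lambda_light = 632.8e-9      # He-Ne wavelength [m]
f_sound = 510e3              # ultrasonic frequency [Hz]
theta_1 = np.radians(0.19)   # hypothetical measured first-order diffraction angle

Lambda_sound = lambda_light / np.sin(theta_1)
print(f"acoustic wavelength {Lambda_sound * 1e6:.0f} um, "
      f"sound speed {f_sound * Lambda_sound:.0f} m/s")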
Nihon University, Chiyoda-ku, Tokyo, Japan
ABSTRACT
We experimentally verified a method by which liquid that has entered a long pore with open ends can be removed from the pore using the acoustic radiation force. If a liquid enters a long pore, it is generally kept there by the retaining force produced by capillary action. Conventional methods of removing the liquid are to apply acceleration to the pore itself, to blow compressed gas into the pore, or to suck the liquid out of the pore. The first method requires complicated work; the second and third produce a strong air flow and cannot be used if a delicate object is present in the flow. To solve these problems, we considered a method using the acoustic radiation force of an aerial ultrasonic wave: the liquid in a long pore with open ends is pushed out of the pore by irradiating it with a high-intensity ultrasonic wave. Because no strong air flow is produced, this method has very little influence on the area around the pore. In the experiment, a point-convergence type source of aerial ultrasonic waves at a frequency of 20 kHz was used to irradiate the liquid in a long pore with open ends and remove it from the pore. The sound pressure of the ultrasonic wave was about 6 to 10 kPa. Using this method, it was confirmed that liquid in pores of 1.5 mm to 5.0 mm in diameter and 3 mm to 20 mm in length could be removed almost instantaneously. Thus, we showed that liquid in a pore can be removed instantaneously by irradiating the opening of the pore with a high-intensity ultrasonic wave.
(1) Department of Physics, BN PG College, Rath, Hamirpur, Bharat, India (2) Department of Physics, Govt. Girls P. G. College, Banda, Bharat, India
ABSTRACT
Ultrasonic velocity and attenuation parameters are closely connected to the microstructural and mechanical properties of materials, and ultrasonic absorption coefficients can be used in non-destructive techniques to characterize them. The most important causes of ultrasonic attenuation in solids are electron-phonon interaction, phonon-phonon interaction and thermoelastic relaxation. At room temperature, the electron mean free path is not comparable to the phonon mean free path and no coupling takes place, so the attenuation due to electron-phonon interaction is absent. The two dominant processes that give rise to appreciable ultrasonic attenuation at higher temperatures are phonon-phonon interaction and thermoelastic relaxation, and both types of attenuation are observed in the calcium oxide crystal. It has been established that at ultrasonic frequencies and at higher temperatures in solids, the phonon-phonon interaction mechanism is the dominant cause of attenuation. The temperature-dependent part of the ultrasonic attenuation has been explained in terms of a model in which the acoustic phonon interacts with a number of thermal phonons in the lattice.
CaO is a key ingredient in the nixtamalization process used to create corn hominy and tortilla dough. Calcium oxide is used for many construction purposes, as in the manufacture of bricks, mortar, plaster and stucco. Its high melting point makes it attractive as a refractory material, as in the lining of furnaces, and the compound is also used in the manufacture of various types of glass. Common soda-lime glass, for example, contains about 12% calcium oxide, while high-melting alumino-silicate glass contains about 20% calcium oxide; one of the new forms of glass used to coat surgical implants contains an even higher proportion, about 24%. The CaO crystal possesses a well-developed structure of the NaCl type and is divalent in nature. Oxides and silicates make up the bulk of the Earth's mantle and crust, and it is therefore important to predict their behaviour. In this work, the ultrasonic attenuation due to phonon-phonon interaction (α/f2)p-p and to thermoelastic relaxation (α/f2)th is studied in the CaO crystal from 100 K to 1500 K along different crystallographic directions. For the evaluation of the ultrasonic coefficients, the second- and third-order elastic constants are also calculated using Coulomb and Born-Mayer potentials, utilizing nearest-neighbour distance and hardness parameter data. Several investigators have given different theories; here the one given by Mason, which relates the Grüneisen constants with the SOECs and TOECs, has been used. The temperature dependence of the ultrasonic absorption in the CaO crystal along different crystallographic directions reveals some typical characteristic features.
(1) Department of Physics, BN PG College, Rath, Hamirpur, Bharat, India (2) Department of Physics, Bundelkhand University, Jhansi, Bharat, India
ABSTRACT
The study of higher-order elastic constants has gained new horizons with the development of materials science, as they play a primary role in understanding the anharmonic and nonlinear properties of solids. The information about these constants is valuable in understanding the nature of short-range forces in crystals. The elastic energy density of a deformed crystal can be expanded as a power series in the strains using a Taylor series expansion. One can obtain this expansion starting from the nearest-neighbour distance and the hardness parameter, utilizing Coulomb and Born-Mayer type central-force interactions for face-centred cubic crystals. The coefficients of the quadratic, cubic and quartic terms are known as the second-, third- and fourth-order elastic constants (SOECs, TOECs and FOECs) respectively. When the values of the second- and third-order elastic constants and the density of a material at a particular temperature are known, one may obtain the ultrasonic velocities for longitudinal and shear waves in different crystallographic directions, which give important information about its anharmonic properties. In obtaining higher-order anharmonicities such as the Grüneisen numbers, the first-order pressure derivatives of the second-order elastic constants (FOPDs of SOECs), the first-order pressure derivatives of the third-order elastic constants (FOPDs of TOECs), the second-order pressure derivatives of the second-order elastic constants (SOPDs of SOECs), the partial contractions and the deformation of crystals under large forces, the third- and fourth-order elastic constants are considered extensively.
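For reference, the expansion referred to above is usually written, with \(\eta_{ij}\) the Lagrangian strain and \(U\) the elastic energy density per unit undeformed volume, as

\[
U \;=\; \frac{1}{2!}\,C_{ijkl}\,\eta_{ij}\eta_{kl}
\;+\;\frac{1}{3!}\,C_{ijklmn}\,\eta_{ij}\eta_{kl}\eta_{mn}
\;+\;\frac{1}{4!}\,C_{ijklmnpq}\,\eta_{ij}\eta_{kl}\eta_{mn}\eta_{pq}\;+\;\cdots,
\qquad
C_{ijkl\ldots}=\left.\frac{\partial^{n}U}{\partial\eta_{ij}\,\partial\eta_{kl}\cdots}\right|_{\eta=0},
\]

so that the quadratic, cubic and quartic coefficients are the second-, third- and fourth-order elastic constants (SOECs, TOECs and FOECs), respectively.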
A proper and systematic evaluation of the elastic constants of isostructural oxides and their dependence on temperature provides the fundamental data for determining the characteristics of the cation-oxygen bonding interactions, which are pertinent to the understanding and theoretical modelling of more complicated oxide compounds. Tellurium oxide (TeO) is a divalent crystal and possesses an FCC crystal structure. In this work, an attempt has been made to evaluate the higher-order elastic constants for TeO at elevated temperatures from 50 K up to near its melting point of 643 K. The data for the SOECs, TOECs and FOECs are used to evaluate the FOPDs of the SOECs and TOECs, the SOPDs of the SOECs and the partial contractions. In evaluating these properties it is assumed that the crystal structure does not change with temperature. The data for these oxides obtained through different techniques also give important and valuable information about the internal structure and inherent properties of the materials, and can be used in the future for different industrial purposes and for further investigations of divalent FCC-structured solids.
(1) Vignan Institute of Technology & Aeronautical Engineering, Deshkuki, India (2) University College of Science, Hyderabad, India
ABSTRACT
Kinetic studies have been carried out by measuring the ultrasonic velocity (v) in mixtures of acids such as ortho-cresol and bases such as aniline with esters such as ethyl acetate. The ultrasonic velocities were measured using an ultrasonic pulse-echo system (Model 440-M) on mixing the different solvents at various concentrations and at different temperatures. The results are interesting because an increase in the basic content of the binary mixture increases the ultrasonic velocity, whereas an increase in the acid content decreases it. Kinetically, the mixing process is first order in the reactive component, acid or base. Arrhenius parameters were computed for the o-cresol - ethyl acetate and aniline - ethyl acetate systems, and the energy, enthalpy, free energy and entropy of activation are of the right order of magnitude for these kinetic processes.
(1) Graduate School of Systems and Information Engineering, University of Tsukuba, Kanto, Japan (2) Department of Pure and Applied Physics, Faculty of Engineering Science, Kansai University, Japan
ABSTRACT
Phononic crystals have various characteristic properties, such as band gaps, group delay and negative refraction; here we focus on negative refraction. Ultrasound focusing using negative refraction by phononic crystals is being investigated by many researchers, and focused ultrasound is expected to find applications in the medical field, among others. However, when an ultrasonic wave propagates in a phononic crystal it attenuates sharply, and once the crystal has been constructed its focal length is fixed. For such applications, it is desirable to be able to vary the focal length of the phononic crystal. In our previous research we proposed a dual-structured phononic crystal with a gap between the two phononic crystals, and verified that the focal length could be varied by changing the thickness of the gap. It was also confirmed that the attenuation of this proposed structure is lower than that of a single phononic crystal of the same thickness. In this research, we examined the relationship between the characteristics of the ultrasound focused by a layer-structured phononic crystal and the crystal structure of each layer, using the finite element method (FEM) and the phononic crystal band structures. As a result, we obtained a more efficient crystal structure for ultrasound focusing by a layer-structured phononic crystal. Experimental verification is future work.
Nihon University, Tokyo Chiyoda, Japan
ABSTRACT
We developed a method of determining the effect of heat on mortar samples by analysing the vibrations of mortar samples that had been exposed to high temperatures of about 500℃ to 1,000℃ and then excited in a non-contact manner using a high-intensity aerial ultrasonic wave of finite amplitude (at a fundamental frequency of 20 kHz). If a high-intensity aerial ultrasonic wave is emitted onto the surface of an acrylic resin or metallic plate, the plate may be excited by the ultrasonic wave in a non-contact manner and vibrate at the same frequency as the emitted wave. If the ultrasonic wave has finite amplitude, it contains harmonic components specific to the wave, so an irradiated plate may produce vibrations at frequencies corresponding to those frequency components. Using such a high-intensity finite-amplitude aerial ultrasonic wave (fundamental frequency 20 kHz, sound pressure 6 to 10 kPa), we attempted to determine the change in the properties of an object that had been exposed to and affected by high heat, for example the heat of a fire. The target object used in this paper was made of mortar, a material typically used in construction. Mortar samples irradiated with the ultrasonic wave vibrated at frequencies corresponding to the fundamental frequency (20 kHz) and several harmonic components (40 kHz to 100 kHz) of the wave. It was found that the mortar samples affected by heat had clearly different vibration velocities at each excitation frequency. Thus, we could determine the effects that heat had on the mortar samples exposed to it.
Donetsk National University, Ukraine
ABSTRACT
The problem of ultrasonic wave propagation in piezoceramic cylinders remains an interesting one, since such materials are widely used in acoustoelectronics and in ultrasonic nondestructive evaluation. In the current paper we study wave propagation in piezoceramic multilayered cylindrical waveguides of noncircular cross-section. The infinite transversely isotropic cylinders considered here have solid or hollow circular cross-sections with a sector cut of arbitrary angular measure, and arbitrary boundary conditions on the surfaces. The method is based on the exact analytical integration of the wave equations of a linear electroelastic medium by using wave potentials. The dispersion functions are obtained from the boundary conditions in the analytical form of functional determinants, and a numerical analysis is carried out to illustrate the approach. The effect of various mechanical parameters is studied and potential applications are discussed. The results are compared with those published earlier in order to check the accuracy of the proposed approach, which is found to be very accurate and efficient.
Department of Physics, B.N.V. Post-Graduate College, Hamirpur, Uttar Pradesh, India
ABSTRACT
Data on the elastic constants and associated properties at high temperature (100-1000 K) for MgO, MgSe and MgS crystals are presented and discussed, starting from the primary physical parameters, namely the nearest-neighbour distance and the hardness parameter, and assuming long- and short-range potentials. When the values of the higher-order elastic constants of a crystal are known, many of its anharmonic properties can be treated quantitatively within the limit of the continuum approximation. If the values of the second-order elastic constants and the density of a substance at a particular temperature are known, one may obtain the ultrasonic velocities for longitudinal and shear waves, which give important information about its internal structure and its inherent and anharmonic properties. Although compendiums of elastic constant data for numerous compounds exist, they are restricted to temperatures at or near room temperature, and current problems in materials science often require values of the elastic constants at elevated temperatures. The compounds of magnesium have attracted a lot of interest due to their complex physical and chemical characteristics, and their physical properties have been studied in past years. However, none of the work reported in the literature so far is centred on the temperature variation of the anharmonic properties. Therefore, in this study, the higher-order elastic constants and related properties are computed up to 1000 K for MgO, MgS and MgSe. The first-order pressure derivatives of the second- and third-order elastic constants, the second-order pressure derivatives of the second-order elastic constants and the partial contractions are also evaluated at different temperatures. The results are compared with other available data and are found to be in good agreement.
(1) Physics Faculty, Vilnius University, Vilnius, Lithuania (2) Institute of Solid State Physics and Chemistry, Uzhgorod University, Ukraine
ABSTRACT
The layered crystals of the CuInP2S6 family are promising materials for applications in functional electronics because they exhibit ferroelectric and semiconductor properties as well as high ionic conductivity. These compounds crystallize in a layered two-dimensional structure of the CuMP2S6 (M = In, Cr, Bi) type. Recently, new crystals and solid solutions were obtained by substituting Cu with Ag, In with Cr or Bi, or S with Se. These compounds also exhibit a rich variety of piezoelectric properties, photosensitivity and ionic conductivity. In this contribution we present the results of experimental investigations of phase transitions and the elastic and piezoelectric properties of these crystals and solid solutions. The ultrasonic pulse-echo method was applied for velocity and attenuation measurements and for piezoelectric sensitivity detection. Critical behaviour of the ultrasonic attenuation and velocity was observed in the phase transition region. Piezoelectric sensitivity was measured by replacing the receiving ultrasonic transducer in the conventional pulse-echo method with a thin plate of the crystal under investigation; in this case piezoelectric sensitivity anomalies corresponding to the elastic anomalies were observed. In pure CuInP2S6 crystals piezoelectric sensitivity exists below the first order ferroelectric phase transition at Tc = 312 K. After substituting In with Cr and S with Se, the phase transition temperature decreased and intermediate phases, possibly incommensurate, appeared above the ferroelectric phase transition point. Substituting Cu with Ag shifts the ferroelectric phase transition to lower temperature and changes the transition to second order. In polarised layered ferroelectric crystals the piezoelectric effect was large enough to promise possible ultrasonic transducer applications. In pure AgInP2S6 and AgInP2Se6 crystals we did not observe phase transitions or ferroelectricity in the temperature range 100 - 300 K. However, in these crystals piezoelectric sensitivity was observed under a bias electric field applied along the c-axis.
(1) Kanagawa University, Yokohama, Japan (2) Asahi EMS Co. Ltd, Tokyo, Japan
ABSTRACT
Ultrasonic complex vibration welding of identical and dissimilar metal specimens, and the structures of the welded areas, are studied using several complex vibration welding systems and scanning and transmission electron microscopes (SEM and TEM). Ultrasonic welding can join various metals directly using vibration and static clamping pressure. The welded area is limited to a very narrow layer, and the method can join dissimilar metal specimens that have different melting temperatures and are difficult to weld by usual methods such as resistance welding. Ultrasonic complex vibration welding with a two-dimensional vibration locus can be used for joining dissimilar metal specimens continuously at multiple positions and gives superior quality compared with conventional ultrasonic welding with a linear vibration locus. Welding of aluminum-copper and aluminum-nickel plate specimens and of aluminum alloys is essential for fuel cells, multi-layer battery or EDLC capacitor electrodes for electric or hybrid automobiles, and various other industrial fields. For large electric current devices, multiple spot or seam welding is required.
Ultrasonic complex vibration welding systems of 15 to 40 kHz were developed using (1) multiple transducers integrated with a transverse vibration disk, (2) a complex vibration converter with diagonal slits, and (3) a longitudinal-transverse vibration rod driven eccentrically by a longitudinal vibration source. Elliptical to circular vibration loci are obtained at the welding tip, driven by power amplifiers of 500 W (1), 2 kW (2) and 5 kW (3). The required vibration velocity and the damage by vibration fatigue are small compared with conventional welding. Using these systems, aluminum, copper, aluminum-copper and aluminum-nickel plate specimens were welded directly and successfully at continuous multiple positions. The structures of the welded areas were observed using SEM and TEM. TEM images of cross sections of the welded specimens showed that they were joined directly without any oxide, intermetallic compounds, mutual diffusion or other altered structures. The required vibration velocity was one-third to one-quarter of that of conventional welding, weld strength close to the material strength was obtained independent of specimen position and direction, and multiple or continuous welding is possible. Alumina-coated aluminum alloy specimens were also welded using complex vibration: the alumina coating was broken up coarsely in the initial welding stage, then broken into small alumina particles by the ultrasonic complex vibration, and finally dispersed throughout the welded specimens.
Department of Applied Physics, AMITY School of Engineering and Technology, Bijwasan, New Delhi, India
ABSTRACT
The ultrasonic properties of hexagonal close-packed Ag-Zn alloys (Zener alloys) have been studied at room temperature for their characterization. For the investigation of the ultrasonic properties, we have also computed the second order elastic constants using a Lennard-Jones potential. The velocities V1 and V2 have a maximum and a minimum, respectively, at 45° from the unique axis of the crystal, while V3 increases with the angle from the unique axis. This differing behaviour of the angle-dependent velocities is correlated with the second order elastic constants. The Debye average sound velocity of these alloys increases with the angle and has a maximum at 55° from the unique axis at room temperature; hence, when a sound wave travels at 55° from the unique axis of these alloys, the average sound velocity is maximum. The results obtained are discussed and compared with available experimental and theoretical results.
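A minimal sketch of the directional Debye average referred to above (Python; the three mode velocities are hypothetical values used only to illustrate the formula) computes the average from the quasi-longitudinal and the two shear-type velocities at a given propagation angle.

    def debye_average(v1, v2, v3):
        # Directional Debye average from the quasi-longitudinal (v1) and the
        # two shear-type (v2, v3) mode velocities at a given propagation angle.
        return (3.0 / (1.0 / v1**3 + 1.0 / v2**3 + 1.0 / v3**3)) ** (1.0 / 3.0)

    # Hypothetical mode velocities (m/s) at one angle, for illustration only
    print(debye_average(4500.0, 2600.0, 2400.0))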
Sonochemistry Centre, School of Health and Life Sciences, Coventry University, UK
ABSTRACT
The application of flow cytometry in combination with fluorescent dyes has become an important analytical tool for microbiological assays in many fields such as biotechnology, environmental remediation and water treatment in the food, pharmaceutical and environmental sectors. It has been shown that ultrasound has a strong bactericidal effect, due to the formation and collapse of acoustic cavitation bubbles, which results in microbial inactivation and cell disruption. In the current study we report the effect of sonication at two frequencies (20 and 580 kHz) on the viability of gram-negative (Escherichia coli) and gram-positive bacteria (Staphylococcus aureus). Results were analyzed using both standard plate counts to determine colony forming units (CFU/ml) and flow cytometry. Flow cytometry has the advantage that it can enumerate bacterial populations in two sub-populations (live and dead bacteria) in the bulk liquid, whereas CFU analysis records only viable populations capable of reproduction. The results illustrate a good correlation between viable counts and flow cytometry. However, flow cytometry reported higher bacterial numbers than CFU. This can be attributed to CFU recording single and aggregated colony forming units, whereas flow cytometry tends to break apart agglomerates in the flow and thereby registers mainly single cells. Flow cytometry for E. coli demonstrated high sensitivity to the 20 kHz frequency, with a continuous decrease in viable cells and an increase in dead cells during the experiments. However, for the gram-positive species S. aureus, no significant effect on bacterial inactivation was observed at this frequency. In contrast, at the higher frequency of 580 kHz and at different power settings, only declumping rather than bacterial inactivation was observed for both bacterial species.
(1) Centro Tecnológico AINIA, Paterna, Valencia, Spain (2) Grupo de Ultrasonidos de Potencia, CSIC, Madrid, Spain
ABSTRACT
Supercritical fluid extraction (SFE) is an industrial technology based on the solvent power that some fluids exhibit at pressures and temperatures above certain values known as the critical point. This process, using supercritical CO2 as solvent, has gained wide acceptance in recent decades because of its advantages over conventional solvent extraction (high selectivity; a solvent that is non-toxic, inert, cheap and recyclable; and suitability for extracting thermolabile substances). One of the main difficulties of SFE is achieving favourable kinetics, because mechanical stirring is difficult to apply to an extractor vessel operating at high pressure. In this context, an interesting alternative is the use of power ultrasound. Ultrasonic radiation is an efficient way to enhance mass transfer processes through mechanisms such as microstirring, compressions and decompressions in the material, heating and/or cavitation. Previous work by this research group demonstrated the feasibility of integrating an ultrasonic field inside a supercritical extractor without losing a significant volume fraction. Moreover, a new self-controlled prototype, robust enough to fulfil industrial requirements for commercial production, was developed and tested under supercritical conditions, giving rise to a patent without precedent. This new ultrasonic device led to notable enhancements both in extraction yield at a given time and in the time required to achieve a given extraction yield when applied to the SFE of almond oil; some experiments gave yields 20% greater than those obtained without ultrasound.
To deepen the knowledge of this new technology, the aim of this work was to study the effect of high power ultrasound (HPU) on the mass transfer zone (MTZ) in supercritical extraction. For this purpose, different tests were performed to assess the effect of HPU on the SFE of oil from milled almonds (3-4 mm particle size). To isolate the effect of the acoustic waves, all experiments were performed with and without ultrasound at identical pressure, temperature and flow rate conditions. The effect of the high-intensity waves on the mass transfer zone, based on oil concentration profiles at different times and bed heights, is discussed.
(1) School of Chemistry/Department of Chemical and Biomolecular Engineering, University of Melbourne, Victoria, Australia (2) Dairy Innovation Australia Ltd., Werribee, Victoria, Australia
ABSTRACT
The ability to tailor the functionality of dairy systems is one of the key factors in the manufacture of dairy products. Of utmost importance is the ability to withstand high processing temperatures without excessive thickening and coagulation. Reconstituted whey protein concentrate (WPC) and isolate (WPI) solutions were batch treated with high intensity (power = 31 W) ultrasound (20 kHz) for periods ranging from 5 s to 60 min. Sonication of reconstituted WPC solutions reduced the particle size distribution from ~10-30 μm to ~1-3 μm, whereas sonication of reconstituted WPI solutions led to a reduction from ~10 μm to ~1-2 μm. Particle size reduction was greatest within the first few minutes of sonication, followed by a slower, gradual decrease with increasing sonication time. The change in the particle size of WPC solutions corresponds to a reduction in viscosity of up to 20%. This effect was attributed primarily to the physical forces generated during acoustic cavitation, which led to a decrease in the disperse volume fraction, as indicated by the significant size reductions. In an alternative and novel approach, ultrasound was used to improve the heat stability of whey proteins. This required pre-heating of WPC solutions reconstituted to 5% solids at 80 °C for 1 minute, followed by sonication at a frequency of 20 kHz for 5 s to 20 min. In conclusion, ultrasound may become a useful tool in the treatment of dairy products to control functionality and to prevent excessive thickening and/or coagulation of whey proteins.
University of Illinois at Urbana-Champaign, USA
ABSTRACT
Consumers' demand for safe and minimally processed foods continues to drive the food industry to pursue new and mild food processing and preservation technologies. Ultrasound is one such technology that might provide safe, fresh, tasty and nutritious foods for consumers. Power ultrasound has been found to be effective in microbial and enzyme inactivation, bio-component separation, enhancement of interfacial heat and mass transfer, homogenization, cutting, and extraction of bioactive components from foods and plants. Due to new developments in ultrasound technology, as well as our increased understanding of cavitation phenomena, there has been increased interest in recent years in the use of ultrasound as an alternative food processing and preservation tool. Combining sonication with other treatments, such as pH, mild heat, and low pressure, has been found to enhance the efficacy of an ultrasound treatment. Usually the process efficiency is improved, especially for microbial and food enzyme inactivation, when ultrasound is combined with heat (thermal sonication) or static pressure (mano-thermo-sonication). Concerns that the food industry has about the application of ultrasound as a food processing method include the quality of the foods treated with ultrasound, as well as scale-up and economic issues. A summary of ultrasound technology and of our understanding of the mechanisms by which ultrasound works, together with information on the benefits and pitfalls of power ultrasound as an alternative food preservation and processing method, will be presented.
Grupo de Ultrasonidos de Potencia, CSIC, Madrid, Spain
ABSTRACT
Ultrasonic processing is becoming an increasingly attractive field due to the sustainable character of ultrasonic energy: low energy consumption and no contaminating processes. However, applications of ultrasonic energy in fluids, and more specifically in gases, have been limited by the difficulty of generating such energy efficiently in large-scale processes. To overcome this problem, a new family of ultrasonic generators with extensive radiators was developed in recent years. These new generators have opened up possibilities to study and implement new processes in fluids at the industrial level. This paper deals with the characteristics and performance of some new sonoprocesses in gases and multiphase media developed at industrial and semi-industrial scale. The application of sonic and ultrasonic energy for environmental and industrial purposes will be presented, as well as the performance of the ultrasonic systems in processes such as defoaming and the treatment of particle suspensions.
(1) Grupo de Análisis y Simulación de Procesos Agroalimentarios, Departamento Tecnología de Alimentos, Universidad Politécnica de Valencia, Valencia, Spain (2) Grupo de Microestructura y Química de Alimentos, Departamento Tecnología de Alimentos, Universidad Politécnica de Valencia, Valencia, Spain (3) Grupo de Ultrasonidos de Potencia, CSIC, Madrid, Spain
ABSTRACT
Power ultrasound is a novel technology for drying processes, with energy saving as a goal. The mechanical effects associated with ultrasonic application improve the mass and heat transfer mechanisms, which may be related to microstructural changes in the foodstuff. The literature reports that a product's porosity is highly significant in determining the influence of ultrasound on the drying process. Therefore, the main goal of this work was to address, from a kinetic and microstructural point of view, the influence of power ultrasound application on the convective drying of a high-porosity product such as eggplant. Convective drying kinetics of eggplant cylinders (height 20 mm and diameter 20.4 mm) were measured at 40 °C and 1 m/s. Trials were also conducted at the same experimental conditions while applying ultrasound at acoustic power levels ranging from 15 to 90 W. All samples were analyzed by cryo-scanning electron microscopy (Cryo-SEM). A diffusion model was used to quantify the kinetic effects induced by ultrasound on the mass transfer process.
Experimental results showed a reduction of drying time with ultrasonic power: the higher the power applied, the faster the drying kinetics. A maximum drying time reduction of 70% was reached when an acoustic power of 90 W was applied. The ultrasonic effect was also observed in the effective moisture diffusivity identified from the model, which showed a significant (p<0.05) linear relationship with the acoustic power. The main cellular tissue in eggplant is the endocarp. This tissue is formed by interconnected cells with large air-filled intercellular spaces, similar to a highly porous sponge. In air-dried samples, the endocarp cells appeared highly degraded and a compacted tissue was observed with practically no intercellular spaces. However, the combined treatments with ultrasound were less damaging than air drying alone; in these samples, the microstructure of the tissue appeared less modified compared to fresh eggplant.
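As an indication of how such a diffusion model can be used to identify the effective moisture diffusivity, the sketch below (Python; the drying-curve data points are hypothetical and merely stand in for measured moisture ratios) fits Fick's series solution for a finite cylinder, built as the product of the infinite-cylinder and infinite-slab solutions, to a drying curve.

    import numpy as np
    from scipy.special import jn_zeros
    from scipy.optimize import curve_fit

    R, L = 0.0102, 0.010   # cylinder radius and half-height (m), 20.4 mm x 20 mm sample

    def moisture_ratio(t, d_eff, n_terms=50):
        # Fick's series solution for a finite cylinder: product of the
        # infinite-cylinder and infinite-slab solutions (uniform initial moisture,
        # equilibrium moisture at the surface).
        lam = jn_zeros(0, n_terms)                      # roots of J0 (radial part)
        mr_cyl = sum(4.0 / l**2 * np.exp(-l**2 * d_eff * t / R**2) for l in lam)
        odd = 2 * np.arange(n_terms) + 1
        mr_slab = sum(8.0 / (k**2 * np.pi**2)
                      * np.exp(-k**2 * np.pi**2 * d_eff * t / (4.0 * L**2)) for k in odd)
        return mr_cyl * mr_slab

    # Hypothetical drying curve: time (s) vs dimensionless moisture ratio
    t_data = np.array([0.0, 1800.0, 3600.0, 7200.0, 14400.0, 28800.0])
    mr_data = np.array([1.0, 0.80, 0.65, 0.45, 0.22, 0.06])
    d_fit, _ = curve_fit(moisture_ratio, t_data, mr_data,
                         p0=[1e-9], bounds=(1e-12, 1e-7))
    print("effective diffusivity ~ %.2e m^2/s" % d_fit[0])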
(1) Interuniversitair Micro-elektronica Centrum vzw, Kapeldreef, Belgium (2) Laboratorium voor Akoestiek en Thermische Fysica, Katholieke Universiteit Leuven, Belgium (3) Afdeling Moleculair Design en Synthese, Katholieke Universiteit Leuven, Belgium (4) Departement Metaalkunde en Toegepaste Materiaalkunde, Katholieke Universiteit Leuven, Belgium
ABSTRACT
In semiconductor manufacturing, megasonic cleaning may play an important role for nano-particle removal, provided the underlying physical processes are thoroughly understood. As shown in recent years, acoustic cavitation is the main contributor to surface cleaning. Crucial parts of the overall cleaning process are therefore the actual nucleation of cavitation bubbles in the bulk of the liquid and the related physical processes that might enhance, suppress or simply accompany bubble nucleation. One such process is the enhanced buildup of temperature gradients in ultrasonic fields due to nucleation onset and the accompanying collapse of the nucleating bubbles. There, the highly non-linear oscillatory behavior of resonant cavitation bubbles might enhance dissipation of acoustic energy and heat transfer from the vicinity of the collapsing bubble to the surrounding liquid. The resulting convection and its relation to the nonlinear interactions of the cavitation bubbles with the acoustic field are investigated in the present study with the help of qualitative and quantitative methods such as Schlieren imaging and sonoluminescence measurements. These methods have the advantage that they do not disturb the nucleation process and are applicable to measurements of both pressure and temperature gradients. In order to identify the contribution of the strongly nonlinear bubble-wall oscillations to the enhanced heating of the bulk liquid, the results are correlated with acoustic noise spectra recorded in parallel with the help of a hydrophone.
(1) Particulate Fluids Processing Centre, University of Melbourne, Victoria, Australia (2) CSIRO Food and Nutritional Sciences, Victoria, Australia
ABSTRACT
The shear intensity of power ultrasound makes it a competitive technology for the generation of a range of particulate fluids. In our recent work, we have shown that this approach can be competitive with microfluidization for the production of oil-in-water emulsions with average droplet diameters as low as 40 nm. The small size of these droplets means that the emulsion is transparent to the eye. The minimum droplet size of 40 nm was only obtained when both droplet deformability (surfactant design) and the applied shear (equipment geometry) were optimal. Results at atmospheric pressure fitted an expected exponential relationship with the total energy density. However, we found that this relationship changed when an overpressure of up to 400 kPa was applied to the sonication vessel, leading to more efficient emulsion production.
In comparable work, we have shown that ultrasound can also be used to generate dense foams with small cell size. Significant decreases in bubble sizes within the bulk solution were recorded at modest acoustic power inputs and this translated into a reduction in the cell size. The use of higher power inputs was less effective due to gas bubbles collecting at the antinodes of the acoustic field where bubble coalescence was enhanced. Higher power levels were effective at reducing the size of nascent bubbles produced by the sparger, but this did not translate into a reduction in foam cell size. The relative position of the sparger with respect to the transducer was also found to be important. These results have been applied to enhance the efficiency of foam fractionation; that is the removal of surface active species through foam enrichment. The application of 20 kHz ultrasound was shown to enhance the removal of surfactants and a bioactive agent by between 125 and 320%.
The Particulate Fluid Processing Center, Department of Bio-molecular and Chemical Engineering and School of Chemistry, The University of Melbourne, Victoria, Australia
ABSTRACT
Bubble behaviour in an acoustic field is well understood for simple air-water systems, and various models have been developed to predict the bubble motion and rectified diffusion growth. What is less well understood is the cavitation behaviour of bubbles in complex solutions containing surface active materials such as surfactants. To date, models developed for rectified diffusion in the presence of surfactants at various ultrasonic frequencies significantly underestimate the growth rates measured in experiments, and the available data for rectified diffusion growth rates with surfactants present are limited. Experiments have been conducted to expand the available rectified diffusion growth rate data for sodium dodecyl sulphate (SDS) and dodecyl trimethyl ammonium chloride (DTAC) surfactants at various concentrations and acoustic driving pressures. Experiments on SDS solutions containing sodium chloride have also been conducted, with results suggesting a further enhancement of the rectified diffusion growth rate for a given surfactant concentration. These results will be compared with existing models, and several approaches will be identified to bring the experimental data closer to the theoretical predictions.
Chemical Engineering Department, Vishwakarma Institute of Technology, Pune, India
ABSTRACT
Ultrasound is employed in several areas including the synthesis and processing of nanomaterials, polymerization reactions, sonocrystallization and waste water treatment. The following applications of ultrasound are discussed: ultrasound-assisted synthesis of bentonite nanoclay, and the combined effect of sonication and adsorption of phenol for water treatment. It is found that exfoliation of the clay material into nanometric platelets occurs under ultrasonic irradiation. Equilibrium is reached in a short period of time, and higher adsorption can be achieved by optimizing catalyst loading and cavitation conditions. The effect of ultrasound on the nucleation and growth periods of crystallization will be discussed; the crystallites formed in the carbonation reaction consist of multiple phases. Functionalized calcite has commercial value, and its synthesis by hydrodynamic and acoustic cavitation will also be discussed. The synthesis of p(methyl methacrylate/butyl acrylate) by an in-situ miniemulsion process in the presence of functionalized inorganic particles using ultrasound will also be discussed; the product is found to have superior performance compared with the conventional compounding process.
(1) Department of Mechanical Engineering, Heriot-Watt University, Edinburgh, UK (2) Department of Design, Manufacture and Engineering Management, University of Strathclyde, Glasgow, G1 1XJ, UK
ABSTRACT
The manufacture of polymeric solid foams with an engineered distribution of mechanical properties has been made possible by irradiating a viscoelastic reacting mixture with ultrasound. Structures with a heterogeneous pore size distribution offer great advantages over homogeneous distributions in many applications that require strength with a minimal amount of material (e.g. airplane wings). However, manufacturing solutions lag well behind the demand for such components. Sonication has recently been demonstrated as a technique that can support these material fabrication processes. The mechanism involves bubble growth in a polymeric melt undergoing foaming, which is influenced by the ultrasonic environment (i.e. sound pressure, frequency and exposure time). Once the foam solidifies, the final porosity distribution within the solid reflects the sonication conditions. To obtain sophisticated distributions of porosity and porosity gradients, fine control of the acoustic pressure field must be achieved. This paper presents an attempt to correlate acoustic pressure with porosity gradation by comparing the simulated acoustic field with the engineered porosity analysed in experimental polyurethane foams. COMSOL Multiphysics™ has been used to recreate the process in the irradiation chamber, and the acoustic fields, both in the environment and in the reaction vessel, have been simulated and validated. Results from this study will allow the optimisation of the manufacturing process of functionally tailored materials made by the sonication method.
(1) Dairy Innovation Australia Ltd., Werribee, Victoria, Australia (2) School of Chemistry, Department of Chemical and Biomolecular Engineering, University of Melbourne, Victoria, Australia
ABSTRACT
High intensity, low frequency ultrasound was used to process and improve the functional properties of certain dairy ingredients using industry-scale ultrasonic reactors. A continuous sonication process operating at a frequency of 20 kHz, capable of delivering up to 4 kW of power with a flow-through reactor design, was used to treat dairy ingredients at flow rates of up to 360 L/hr. The dairy ingredients treated with ultrasound included reconstituted whey protein concentrate and whey protein and milk protein retentates. Sonication of aqueous solutions for less than 1 minute and up to 2.4 minutes, with an applied energy density between 28 and 300 J/ml, resulted in a significant reduction in the viscosity of dairy solutions containing high solids (up to 54% (w/w)). The viscosity of aqueous dairy ingredients treated with ultrasound was reduced by between 6% and 50%, depending strongly on the composition, processing history, acoustic power and contact time. When sonication was combined with a pre-heat treatment of ≥ 80 °C, the heat stability of the dairy ingredients was significantly improved. The effect of sonication was attributed mainly to physical forces generated through acoustic cavitation, as supported by the particle size reduction observed in response to sonication. A notable improvement in the gel strength of sonicated and heat-coagulated dairy systems was also observed. Overall, the sonication procedure for processing dairy ingredients offers potential economic benefits for the dairy industry by improving process efficiency and enabling the development of new products and ingredients.
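A minimal sketch of how the applied energy density quoted above relates to acoustic power and flow rate in a flow-through cell (Python; the 4 kW and 360 L/hr figures are simply the upper limits quoted above, used here for illustration):

    def specific_energy_j_per_ml(power_w, flow_l_per_hr):
        # Applied energy density (J/ml) in a continuous flow-through sonication cell:
        # acoustic power divided by volumetric flow rate.
        flow_ml_per_s = flow_l_per_hr * 1000.0 / 3600.0
        return power_w / flow_ml_per_s

    # e.g. the upper limits quoted above: 4 kW delivered at 360 L/hr
    print(specific_energy_j_per_ml(4000.0, 360.0))   # -> 40.0 J/ml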
Department of Physics, Qom branch, Islamic Azad University, Qom, Iran
ABSTRACT
The interaction of high-intensity lasers with matter has revealed new phenomena over the last twenty years. Femtosecond pulse durations are of interest for transferring high power in interactions with matter and plasma. More recently, attosecond techniques have emerged as a new method for measurements on fast atomic time scales. Under normal conditions, electrons are confined in the atom by the Coulomb potential. A high-intensity femtosecond laser field exerts a time-dependent force on the bound electron; if this force is large enough, the electron can tunnel out of the atom, the attraction of the negative electron to the positive ion rapidly decreases, and the electron escapes the ion. In fact, the electric field of the light first pushes the electron wave packet away from the ion, but when the field reverses direction, the force drives the electron back. The recollision times are synchronized with the laser pulse and can be controlled with attosecond precision. In this work we describe the energy difference of the recolliding electron before and after the collision. The generated electromagnetic attosecond pulse is investigated in the frequency domain, and the major parameters for controlling the shape and intensity of the attosecond pulse are demonstrated.
Department of Physics, Qom Branch, Islamic Azad University, Qom, Iran
ABSTRACT
Terahertz applications such as biological sensing, imaging, surface chemistry and high-field condensed matter studies have motivated the development of sources of this radiation over the last decade.
Various schemes based on optical rectification, quantum cascade intersubband transitions, photoconductive antennas, varactor frequency doublers, synchrotron radiation and so on are used to generate terahertz radiation. Electromagnetically induced transparency in an ideal plasma was demonstrated by Harris. Recently, terahertz radiation via electromagnetically induced transparency in the ion-acoustic frequency region of laser-produced dense plasmas was reported by Nakagawa. In this work, we consider the interaction of a high-intensity laser pulse with a dense plasma. Using the ion hydrodynamic equations and Maxwell's equations, the ion density variation is obtained, from which the terahertz power can be estimated. This result can be used to optimize the power of the THz radiation generated in the ion-acoustic frequency region.
Czech Technical University in Prague, Faculty of Electrical Engineering, Prague, Czech Republic
ABSTRACT
Environmental applications, such as volatile organic compounds decomposition, destruction of nitrogen oxides, or ozone generation, utilize different chemical reactions. The efficiency of these reactions depends, among other things, on the temperature, on the residence time (mixing of reactant medium) and also on the pressure in the reaction volume. Increase of the residence time and pressure in the reaction volume can be achieved by application of power ultrasound. At the same time, many reactions can be enhanced by ionization of the reactant medium, which is most frequently performed by electrical discharges. The synergy of power ultrasound with electrical discharges therefore opens new and unique perspectives for many applications. The simplest and most reliable way to meet these requirements is through the combination of an acoustic resonator with non-thermal electrical discharges.
The discharge power and volume are important factors, proportional to the quantity of medium processed (destroyed, decomposed). To increase the discharge volume and the discharge current-voltage range while preventing the transition of the discharge into a spark, a stabilized system of multi-needle electrodes facing a plane electrode is often used. Applying airflow through all of the needles, with an individual ballast resistor for each needle, stabilizes and electrically separates the individual discharges in the needle-to-plane channels; however, this arrangement requires a lot of space and good insulation of the feed wiring. We describe a new resonator setup with a negative multi-needle electrode and one flat positive electrode placed at a node of the acoustic pressure of the resonator. The great advantage is that only one common resistor is used for all needles. The discharge is stabilized even without airflow through the needles: it becomes more uniform and the discharge volume is substantially increased. We explain the mechanism of the discharge in a high-intensity acoustic field as follows: first, the gas particles oscillate in the plane of the pressure node; second, according to Meek's criterion, electric discharge is easier to initiate at lower static pressure. The streamers were observed to follow the lower-pressure channels produced by the sound field.
(1)School of Chemistry, University of Southampton, Highfield, Southampton, UK (2)Institute of Sound and Vibration Research, University of Southampton, Highfield, Southampton, UK
ABSTRACT
A set of experiments designed to characterize an ultrasonic piston-like emitter and a cylindrical ultrasonic reactor is presented. These include electrochemical, acoustic and imaging studies of the systems deployed. An electrochemical technique that can detect the erosion caused by single inertial cavitation events within these systems is reported and the comparative results are discussed. The technique relies on an opto-isolated measurement of erosion/corrosion of the electrode surface employed. Mapping of the systems is combined with acoustic and luminescent imaging. In addition, high-speed imaging within the different environments studied is used to support the conclusions drawn.
(1) Dipartimento di Scienze e Tecnologie Chimiche, Università di Roma Tor Vergata, Roma, Italy (2) School of Chemistry, The University of Melbourne, Melbourne, Victoria, Australia
ABSTRACT
The ultrasonic synthesis and characterization of microbubbles and microcapsules using thiolated polymethacrylic acid and lysozyme was carried out. The protein and polymer shells are stabilized by interchain disulfide cross-linking induced by the radicals and superoxides generated during the sonolysis of water. The remarkable stability of the microbubbles is ascribed to the thick (500 nm) and compact cross-linked polymer shell. The mechanism of microbubble/microcapsule formation is illustrated, and the structural and functional properties of the microbubbles/microcapsules have been investigated. It was found that the far-ultraviolet circular dichroism spectrum of lysozyme microbubbles changed from the native α-helix-rich spectrum to one characteristic of a β-structure. The lysozyme shell is partially arranged in fibrils with a β-sheet internal structure, as shown by the Maltese-cross pattern obtained using cross-polarization microscopy and Congo red staining. Studies of the biodegradability of the lysozyme microbubbles/microcapsules using proteolysis with a digestive enzyme (trypsin) demonstrated irreversible protein denaturation and crosslinking in the microbubble shell. In addition, we have developed a new technique in which high frequency (355 kHz) treatment of the microbubbles can be used to reduce their size and narrow their size distribution. Microbubbles loaded with an anti-cancer agent conventionally used for the clinical treatment of malignancies (e.g., doxorubicin) and coated with an MRI contrast agent (ferritin) can potentially be used as contrast agents for dual-modality MRI and ultrasound imaging.
(1) Impulse Devices, Inc., Grass Valley, CA, USA (2) Applied Physics Laboratory, U. Washington, Seattle, WA, USA (3) Nat. Center for Phys. Acoustics, U. Mississippi, Oxford, MS, USA (4) Boston University, Boston, MA, USA
ABSTRACT
It is well known that cavitation collapse can generate intense concentrations of mechanical energy, sufficient to erode even the hardest metals and to generate light emissions visible to the naked eye (sonoluminescence, SL). Considerable attention has been devoted to the phenomenon of "single bubble sonoluminescence" (SBSL), in which a single stable cavitation bubble radiates light flashes every acoustic cycle. Most of these studies involve acoustic resonators in which the ambient pressure is near 0.1 MPa (1 bar), with acoustic driving pressures on the order of 0.1 - 0.4 MPa. We describe a high quality factor, spherical resonator capable of achieving acoustic cavitation at ambient pressures in excess of 30 MPa (300 bar). This system generates bursts of violent inertial cavitation events lasting only a few milliseconds (hundreds of acoustic cycles), in contrast with the repetitive cavitation events (lasting for many minutes) observed in SBSL. Cavitation observed in this high pressure resonator is characterized by flashes of light with intensities up to 1000 times brighter than SBSL flashes, as well as by spherical shock waves with amplitudes exceeding 30 MPa (300 bar) at the resonator wall. Computer simulations estimate the shock wave amplitude at the time of formation, near the collapsing bubble, to be around 1-10 TPa (10-100 Mbar). Both the SL and shock amplitudes increase with static pressure. The possibilities of reaching these extreme conditions in current experiments will be discussed.
(1) Department of Applied Physics and Chemistry, The University of Electro-Communications, Tokyo, Japan (2) Department of Physics, Meiji University, Tokyo, Japan
ABSTRACT
Recently, extremely intense sonoluminescence in sulfuric acid was discovered in both single bubble sonoluminescence (SBSL) and multibubble sonoluminescence (MBSL). One of the most important discoveries in sulfuric acid for sonochemistry is the emission from electronically excited metal atoms, because these emissions have given further insight into how non-volatile metal cations become heated in a collapsing bubble. In this study, we have examined alkali-metal emissions during MBSL in sulfuric acid containing sodium sulfate at 28 kHz and 150 kHz, in comparison with those in aqueous solutions. Observations in a 1 M sodium sulfate sulfuric acid solution at 28 kHz and 150 kHz under Ar revealed that orange emissions, confirmed to be excited sodium atom emission at 589 nm, appeared in spatial locations different from the blue-white emission only at 28 kHz. Stroboscopic observation suggested that the sodium atom emission occurs when a large bubble ejects tiny bubbles toward a pressure node after bubble coalescence around a pressure antinode. The intensity of the sodium atom emission in sulfuric acid increased at the lower frequency, the opposite tendency to the water case. Comparing a high resolution spectrum of the sodium atom emission in sulfuric acid with that in water, the widths of the spectra were almost the same, except for an additional superimposed component in the water case. The temperatures and pressures inside the bubbles exhibiting sodium atom emission at 28 kHz and 150 kHz were estimated to be 1900 K and 100 atm, and 2200 K and 150 atm, respectively.
Department of Physics, Meiji University, Kawasaki, Japan
ABSTRACT
Sonoluminescence (SL) from alkali-metal salt solutions has been studied because the emission mechanism from non-volatile alkali-metal ions is of great interest. We measured multi-bubble SL spectra from KCl solutions saturated with Ar, Xe and He gases at temperatures in the range 15 - 40 ℃ at a frequency of 148 kHz. For Ar-saturated solutions, the spectral line width of the K atom emission, which broadened asymmetrically towards the red, was independent of temperature, whereas the K-atom line intensity decreased with increasing temperature. These results show that the amount of water vapor affects the line intensity but not the K-line width. The line width and intensity depended on the degassing procedure. We also observed, under some ultrasound-irradiation conditions, that each K-atom doublet has two separate peaks: one unshifted and the other shifted and broadened. The results for Xe-saturated solutions clearly indicated that the spectrum of the K atom emission is composed of two peaks, an unshifted narrow line and a shifted broadened line. In contrast to the cases of Ar and Xe saturation, in He-saturated solutions we observed a symmetrically broadened K-atom doublet shifted towards the blue by 0.35 nm. These results strongly suggest that the excited K atoms are perturbed by rare gases inside the bubbles. The rare-gas effect observed is in good agreement with results from gas-phase spectroscopy. We conclude that the K atom emission occurs in the gas phase inside bubbles. The origin of the unshifted narrow peak should be investigated in the future.
(1) Department of Anatomy, School of Medicine, Fukuoka University, Fukuoka, Japan (2) Tokiwa Science Corporation, Fukuoka, Japan
ABSTRACT
At room temperature, formaldehyde (FA) evaporates as a colorless gas with an irritating odor. Considering the health hazard from FA, Japan implemented in March 2008 a new guideline lowering the FA vapor limit to 0.1 ppm; before this, 0.5 ppm had been considered the safety limit by most industries. The objective was to design a portable device that can effectively neutralize FA vapor. A bottle of 37% FA was placed inside an enclosed 70 x 50 x 78 cm cardboard box and opened for 2 min. The inner side of the box was sealed with vinyl cellophane to prevent the escape of vapor, and a small fan in the box ensured even distribution of the vapor. The FA concentration (in ppm) was then monitored for up to 45 min. These data served as the control against which the experimental groups were compared. In experiment 1, Infutrace (IFR, Pharma Corporation), a commercially available FA neutralizing agent, was either sprayed inside the box or nebulized using an ultrasound device. Experiment 2 used pork meat dipped into 15% FA for 24 hrs and then exposed to open air to allow FA evaporation. The nebulized Infutrace was allowed to mix with the evaporating FA before being pumped in for measurement of the FA concentration.
The concentration of vaporized FA in the enclosed box reached 0.35 ppm. Spraying with Infutrace reduced the FA concentration to 0.15 ppm, while nebulized Infutrace reduced it to 0.04 ppm. In the pork meat experiment, the FA vapor reached 20 ppm, which was effectively reduced to 0.100 ppm by ultrasound-nebulized Infutrace, equivalent to a 99.5% FA removal rate. The results show that the newly designed portable ultrasound device delivering Infutrace can effectively neutralize vaporized FA. This method is inexpensive compared with the ventilation-filtration systems usually installed in large facilities and is likely to be more cost effective.
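For reference, the quoted removal rate follows directly from the measured concentrations (a one-line check in Python):

    initial_ppm, final_ppm = 20.0, 0.100
    removal = (initial_ppm - final_ppm) / initial_ppm * 100.0
    print(f"{removal:.1f}% FA removal")   # -> 99.5% FA removal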
Bundesallee 100, Braunschweig, Germany
ABSTRACT
For the design and optimization of technical ultrasound applications, empirical methods are applied which rely on experience rather than measurements or models. This complicates the objective description of important output quantities, such as the cleaning quality or the conversion of sonochemical reactants, as well as quality management in manufacturing processes. No international standards exist that reliably describe measurement and quantification methods for the output of cavitation technologies.
This paper presents different techniques for obtaining quantitative cavitation parameters based on practical methods and shows the relations between these parameters. Hydrophone measurements were used to describe the sound field as the driving force of cavitation, and different spectral parameters were obtained from a spectral analysis. The erosive effect of cavitation is investigated by an aluminium foil technique, including a newly developed image processing technique which can distinguish between holes, ablation and dents in the foil; the final result is a quantitative erosion parameter. For the description of sonochemical effects, the reduction of iodine is used as a model reaction and a spatially resolving detection method is presented. Finally, sonoluminescence was also detected with spatial resolution for comparison. Measurements of all these parameters in ultrasonic cleaners are presented, showing that different properties and effects of cavitation can be described. The relations between the parameters are investigated by means of a multivariate data analysis and the correlation values are discussed. The reproducibility of the measurements is, at least for the sound field parameters, better than 30%, and it can be shown that a quantitative description by the parameters presented is favourable for applications in industry.
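As a minimal illustration of the kind of multivariate correlation analysis mentioned above (Python; the table of per-run values is entirely hypothetical and only demonstrates the computation):

    import numpy as np

    # Hypothetical per-run cavitation parameters: a spectral descriptor (dB),
    # foil erosion fraction, sonochemical yield and sonoluminescence counts.
    # Real values would come from the measurement techniques described above.
    data = np.array([
        [-42.0, 0.12, 0.8, 1.2e4],
        [-38.0, 0.21, 1.3, 2.5e4],
        [-35.0, 0.33, 1.9, 3.9e4],
        [-31.0, 0.41, 2.4, 5.1e4],
    ])
    corr = np.corrcoef(data, rowvar=False)   # Pearson correlations between the parameters
    print(np.round(corr, 2))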
V.I.Il'ichev Pacific Oceanological Institute of FEBRAS, Vladivostok, Russia
ABSTRACT
A new model of the shape of the Na D-line in sonoluminescence spectra is presented. The model is based on the hypothesis that the complex structure of the line (its shift, broadening, asymmetry and the presence of parent peaks) is formed by spectra radiated at various densities of the perturbing environment. The interval of density values during the emission is obtained by fitting model spectra to experimental data. The best fit is found for the case in which the rate of density growth increases towards the moment of full collapse. The estimated density interval during emission is 25-400 Amg for the case of an NaCl water solution in argon at 22 kHz. Assuming that the intrabubble temperature is 2000 K, the pressure is about 3000 bar at 400 Amg. It is also proposed that the only origin of the narrow parent Na D-line peaks observed in the experimental spectra is "low-density" emission. The sonoluminescence spectra of K and Li are also studied within the model. It is observed that increasing the ultrasound frequency shifts the boundaries of the density interval to lower values.
Department of Physics, Brahmanand Post Graduate College, Rath, Hamirpur, Bharat, India
ABSTRACT
Nanofluids are stable suspensions of nanoparticles in a liquid. In order to avoid coagulation of the particles, the particles must be coated with a second, distance-holding phase, which in most cases consists of surfactants that are stable in the liquid. An important application of nanofluids is as coolants, since the addition of only a few volume percent of nanoparticles to a liquid coolant significantly improves its thermal conductivity. The term nanotechnology has also been used more broadly to refer to techniques that produce or measure features less than 100 nanometers in size; this meaning embraces advanced microfabrication and metrology. Nanotechnology based on molecular manufacturing requires a combination of familiar chemical and mechanical principles in unfamiliar applications.
Metal nanoparticles can be used in various application fields, such as optical filters or nanolithography. Metal nanoparticles are also widely applied in catalysis because of their high surface-to-volume ratio. Copper nanoparticles have been synthesized by the flow-levitation method and coated with carbon-and-hydrogen films by hollow-cathode glow discharge. The uncoated and coated Cu nanoparticles have been analyzed by transmission electron microscopy, X-ray diffraction and infrared absorption, and their size, dispersion and coating thickness have been examined. The addition of copper nanoparticles did not change the dependence of heat transfer on acoustic cavitation and fluid subcooling. Ultrasonic velocity is the speed at which sound propagates in a given material. It depends on the material's density and elasticity and is related in a simple way to the various coefficients of compressibility (isentropic, isenthalpic and isothermal); hence the importance of its measurement and modeling over wide temperature and pressure ranges. In this work we have measured the ultrasonic velocity at different temperatures and frequencies in a nanofluid containing 15 nm copper particles using an interferometer technique.
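The simple relation mentioned above between ultrasonic velocity and isentropic compressibility is the Newton-Laplace equation, κ_s = 1/(ρc²); a minimal sketch (Python; the velocity and density values are illustrative, not measured data):

    def isentropic_compressibility(velocity_m_s, density_kg_m3):
        # Newton-Laplace relation: kappa_s = 1 / (rho * c**2), in 1/Pa.
        return 1.0 / (density_kg_m3 * velocity_m_s ** 2)

    # Illustrative values only (not measured data): c ~ 1500 m/s, rho ~ 1050 kg/m^3
    print(isentropic_compressibility(1500.0, 1050.0))   # ~4.2e-10 1/Pa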
Department of Chemical and Environmental Engineering, Faculty of Engineering, The University of Nottingham Malaysia Campus, Jalan Broga, Selangor, Malaysia
ABSTRACT
The preparation of poorly water-soluble drugs in the form of nanoemulsions is of increasing interest in current drug delivery systems, as nanoemulsions appear to be an excellent drug vehicle to facilitate the delivery of hydrophobic pharmaceutical ingredients into the gastrointestinal tract, thereby improving the oral bioavailability of the drug. In the present study, an energy-efficient cavitation method has been used to prepare well-optimized nanoemulsion formulations encapsulating a variety of pharmaceutical ingredients such as aspirin, curcumin, ganoderic acid and Ganoderma polysaccharide. The results clearly show that cavitation is a powerful and promising approach for the efficient production of nanoemulsions encapsulating different active ingredients with small droplet size and narrow size distribution. It was also confirmed that the operating parameters must be controlled in order to obtain an optimal, minimal droplet size. Concerning the stability of each formulation, the particle size distribution (PSD) and polydispersity index (PDI) of each sample remained approximately unchanged at room temperature after two months of storage. In addition, the zeta potential measured by a Zetasizer approached 0 mV in all the developed formulations.
Osaka Prefecture University, Osaka, Japan
ABSTRACT
Various methods for the synthesis of metal nanoparticles have been actively researched in order to obtain functional nanomaterials such as catalysts and sensors. In this study, sonochemical reduction methods for the synthesis of metal nanoparticles in aqueous solutions were investigated. It was found that the rates of reduction of metal ions depend strongly on the type and concentration of organic additives. In addition, various parameters such as ultrasound intensity, ultrasound frequency and dissolved gas affected the rate of reduction of the metal ions. To control the size and shape of the metal nanoparticles, a seed-mediated synthesis method was also investigated under ultrasonic irradiation.
Faculty of Health and Life Sciences, Coventry University, UK
ABSTRACT
Water can contain many different types of pollutants, of both a chemical and a biological nature, and therefore must be treated prior to use. Textile effluents contain waste dyestuffs which are sometimes difficult to treat by conventional processes, which often involve biological and chemical methods. Effluents from many industries, such as pulp bleaching by chlorine, hydrolysis of herbicides and oil refining, contain aromatic chlorophenols. Due to their toxicity, the treatment efficiency of chlorophenols in general biological treatment systems is normally low and fairly inefficient. Other chemicals, known as endocrine disruptors, have also been found within our water systems; these chemicals target hormones and tissues in the body and are thought to have long-term effects. Pollution by various bacterial organisms and algae has also become more commonplace, with large-scale problems occurring worldwide, and treatment using conventional methods can be difficult and in some cases ineffective. In order to treat water more efficiently, advanced oxidation processes must be employed. Laboratory-scale applications of advanced oxidation processes (AOPs) using ultraviolet light and oxidizing chemicals such as ozone, hydrogen peroxide and Fenton's reagent have been found to be extremely effective for chemical treatment; such processes are based on the in-situ generation of very reactive free radicals such as the hydroxyl radical (·OH). Ultrasound is considered to be such an advanced oxidation process. Its ability to generate hydroxyl radicals at low temperatures has generated interest in dye decolourisation and also in the oxidation of chlorophenols. In this study, the decolourisation of several dyes in aqueous solution was investigated in the presence and absence of sonication at varying frequencies, with the most effective being 850 kHz. Treatment of aromatic chlorophenols also produced degradation at 850 kHz. However, biological systems appear to respond differently, with the most effective frequency being 20 kHz, which is primarily a frequency used for cell disruption rather than ·OH radical generation.
Department of Chemistry, University of Bath, UK
ABSTRACT
The sonochemistry of water based systems is of interest in a large number of areas including pollution remediation, chemical synthesis and safety implications in medical systems. In principle, sonication of water is quite straightforward, giving H· and OH· radicals which can then react further. Additional reaction pathways occur when volatile compounds evaporate into the bubble as it grows. However, there remain many details of the process that are unclear. We have been studying the effects of cavitation in aqueous solution using a variety of methods including detection and quantification of the radical intermediates as well as monitoring sonoluminescence emission. These methods have been applied to a range of solutions used in industrial processes, including emulsions. We have also applied the methods to a number of medical and dental ultrasound instruments. Some results for each area will be described.
In particular, in collaboration with colleagues, we have investigated how the experimental conditions affect changes in sonoluminescence and in acoustic emission. We have shown that the rate of radical production, the acoustic emission arising from the collapsing bubble and the results from luminescence quenching experiments show good agreement. In addition, some unexpected effects were noticed when using ultrasound with two different ultrasound set-ups; a 20 kHz horn and a 515 kHz emitting transducer. A possible model to explain some of these results has been proposed suggesting that the type of cavitation is different in the two situations in terms of the proportion of stable and transient bubbles that exist. Some recent experiments aimed at clarifying the situation will be described.
(1) Mesoscale Chemical Systems, MESA+ Research Institute, University of Twente, The Netherlands (2) Physics of Fluids, University of Twente, The Netherlands (3) Department of Mechanical Engineering The Johns Hopkins University, USA
ABSTRACT
The sonochemical generation of radicals by applying ultrasound in the medium kHz regime (100 - 500 kHz) at pressures up to 500 kPa to gas bubbles entrapped in pits of 5-50 μm radius micromachined in silicon substrates was studied. The gas bubbles entrapped in the pits are stable for hours, and their oscillation leads to the ejection of micrometer-sized bubbles. By using luminol as a chemiluminescent dye to visualize radical production by cavitation, we could demonstrate that these expelled microbubbles are chemically active. To quantify this activity, the product of the reaction of terephthalic acid with the hydroxyl radicals generated in the sonochemical process was measured by a fluorescence method. The results show an increase of more than 50% in the total energy efficiency, expressed as the amount of radicals generated per unit of power injected into the system, compared to an experiment without the surface bubbles.
Department of Physics, Meiji University, Kawasaki, Japan
ABSTRACT
Multibubble sonoluminescence pulses of Na and continuum emission were observed from an NaCl solution in ethylene glycol saturated with argon or xenon, using a photomultiplier with a 0.78 ns rise time and a 4 Gsample/s oscilloscope, at several ultrasonic frequencies. The continuum emission pulses showed single peaks with a width of 1.4 ns, which is nearly equal to the instrumental width. At a frequency of 28 kHz the Na emission pulses showed multiple peaks, each with a width of 1.4 ns and separated by intervals of 2-3 ns. At 68 kHz the Na emission pulses showed single peaks with a width of 1.4 ns, and no multiple-peaked pulses were observed. This result disagrees with that of Arakeri et al., who reported a Gaussian pulse shape with widths ranging from 10 to 165 ns. High-speed photography of the bubble dynamics in this solution suggests that the multiple peaks are due to the superposition of single peaks originating from many daughter bubbles fragmented from very large bubbles, a phenomenon that may be peculiar to a viscous liquid.
(1) Nuevos Desarrollos Tecnológicos en Electroquímica: Bioelectroquímica y Sonoelectroquímica, Departamento de Química Física e Instituto Universitario de Electroquímica, Universidad de Alicante. Ap., Spain (2) Centre RAPSODEE, Ecole des Mines Albi, Albi, France (3) Université de Toulouse; Mines Albi; CNRS, Albi, France
ABSTRACT
Chlorinated compounds have received special attention as pollutants due to their wide dissemination in the environment. Among them, perchloroethylene (PCE) is a solvent widely used in many areas of industry and has been reported as a major intermediate in the degradation of other chlorinated compounds; however, to our knowledge, there is no extensive study of the sonochemical degradation of this compound. Similarly, few electrochemical treatment studies of aqueous solutions of this compound exist, and the sonoelectrochemical approach is only now emerging. In this work we present a comparative analysis of the degradation of aqueous solutions of PCE by sonochemical, electrochemical and sonoelectrochemical treatment, pointing out the advantages and drawbacks of the different approaches, as well as the synergic effects of the simultaneous application of both energetic fields, ultrasound and electricity. Prior to these studies, specific research was carried out to develop components (anodic materials) that are stable under high-power ultrasound fields, and the experimental devices used during this study were characterized.
The three approaches were developed at laboratory scale, analyzing the viability of the process from a technical, economic and environmental point of view, despite the fact that the sonochemical devices used were non-optimized laboratory systems and the sonoelectrochemical devices were sonochemical reactors adapted for sonoelectrochemical studies. Figures of merit such as fractional conversion, current efficiency, mass balance error, selectivity (or speciation), degradation efficiency, degradation and energy consumption were defined and determined for the three approaches. In summary, the sonochemical method presents serious deficiencies, not only from an environmental point of view but also in its energy requirements. The electrochemical method presents competitive costs and technically feasible processes (using optimized filter-press reactors), but it does not meet the environmental requirements. The sonoelectrochemical treatment provided the best results from the technical and environmental point of view, but economic issues must be improved. Further research lines were suggested on the basis of the obtained results.
Drittes Physikalisches Institut, Georg-August-Universität Göttingen, Germany
ABSTRACT
Strong ultrasonic fields in liquids can cause cavitation and sonoluminescence, commonly known as multi-bubble sonoluminescence (MBSL). In addition to the overall intensity of the light emission from MBSL, information is also hidden in its temporal and spatial distribution. Ideally, the whole optical spectrum would be resolved in time and space. To achieve this aim at least partially, we use a set-up with an ICCD camera that is triggered with a fixed phase relative to the acoustic driving. This phase coupling allows us to repeatedly expose only short time windows (down to 100 ns) per cycle. To collect sufficient MBSL light, these 100 ns windows are accumulated over many cycles. A delay line allows the window position to be shifted over the full acoustic driving period. Furthermore, three color filters are used to achieve a rough spectral resolution. This arrangement is used for various cavitation structures and different ultrasonic field parameters. The results show the regions and time periods of MBSL in the spectral filter ranges. One goal is to draw conclusions about the modalities and temperatures of bubble collapse in different spatial regions of the cavitating liquid.
(1) University of Bath, Bath, UK (2) University of Birmingham, Birmingham, UK
ABSTRACT
One application of physical acoustics is the development of ultrasonic dental instruments for cleaning purposes, including descalers and endosonic files. The vibration characteristics of these instruments have been measured by scanning laser vibrometry (SLV). One possible contribution to the cleaning process is cavitation, and we have been using sonochemical methods to characterize cavitation around dental instruments. For example, significant amounts of sonoluminescence can be produced at the vibration antinodes of the endosonic files, and there is good correspondence between regions of cavitation and regions of large vibration. The cleaning efficacies of the files were studied using an irrigant used in endodontics (sodium hypochlorite) to bleach a dye (Rhodamine B). Results showed that ultrasound accelerates the rate of degradation of the dye. For a range of different file shapes and sizes, there was a correlation between sonoluminescence and Rhodamine B degradation, although the chemical effect of each file depended on its design. A comparison of the chemistry with the SLV results should allow the optimization of cavitation production along the endosonic files by modifying their shape to increase the cleaning efficacy for the benefit of treatments.
National Institute of Advanced Industrial Science and Technology (AIST), Japan
ABSTRACT
The interior of an ultrasonic cavitation bubble provides extreme conditions of high temperature and high pressure. Under these conditions, water is easily decomposed and oxidants such as hydroxyl radicals, hydrogen peroxide and ozone are created. At the interface of the bubbles, these oxidants react with chemicals such as luminol, and light is emitted in a process known as sonochemiluminescence (SCL). Chemical reactions involving acoustic bubbles are referred to as sonochemical reactions. The SCL intensity initially increases as the acoustic amplitude increases; at higher acoustic amplitude, the SCL intensity drops dramatically. It is frequently observed that at this point the liquid surface vibrates, apparently due to the acoustic radiation force. A detailed study of the influence of the liquid-surface vibration amplitude on the sonochemical-reaction efficiency, which has not been reported so far, is needed to clarify the mechanism of the reduction at high acoustic amplitude.
In this study, the influence of surface vibration on the SCL intensity under pulsed ultrasound at 151 kHz is investigated, mainly through optical measurement of the vibration amplitude. The pulsing operation inhibits the generation of large degassing bubbles that restrict the spatial region effective for sonochemical reaction. It is shown that the vibration amplitude of the liquid surface grows gradually under pulsed ultrasound as the power applied to the transducer increases. Meanwhile the SCL intensity increases and then decreases after reaching a peak. It is found that the SCL intensity during high-amplitude pulsing falls almost to zero when the instantaneous displacement of the liquid surface approaches or exceeds one quarter of the ultrasonic wavelength. This condition on the liquid-surface vibration gives a limit for the establishment of the resonant standing wave effective for sonochemical reaction.
(1) Yamagata University, Yonezawa, Yamagata, Japan (2) Asahikawa Medical College, Asahikawa, Hokkaido, Japan
ABSTRACT
The light emission measurement of sonochemical luminescence using an aqueous solution of luminol (5-amino-1,2,3,4-tetrahydrophthalazine-1,4-dione) is thought to be useful as a method to predict multi-bubble cavitation behavior, because the time from the formation of active bubbles to light emission is very short compared to other methods using a chemical reaction. From this light emission phenomenon, we intended to predict the process of growth and dissipation of cavitation bubbles. To ascertain the optimum sample conditions for our system, we first measured the luminescence intensity while changing the luminol concentration, sodium carbonate concentration and liquid temperature. The intensity of luminescence was highest when the sodium carbonate concentration was 450 mM. When the sample temperature was varied from 3 to 50 °C, the intensity of luminescence was highest between 20 and 30 °C. These were taken as the optimum conditions for the system, and an experiment was conducted using pulse burst waves. In the experiment, the pulse duration was set in the range from 0.3 to 8.0 milliseconds and the interval duration was set in the range from 0.01 milliseconds to 10 seconds, and the intensity of luminescence was measured using pulse burst waves. It was ascertained that, for a pulse duration of 4.0 milliseconds or less, the intensity of luminescence decreases to about 90 percent even if the pulse train has a duty ratio of 1:0.02, which is very close to a continuous wave. It was found that the interval duration in such a state is fixed at 0.1 milliseconds regardless of the pulse duration. Based on these results, we identified the time of activation (the time taken by bubbles to become active during ultrasound irradiation) and the time of deactivation (the time taken for the activity to decay when the irradiation is stopped).
National Institute of Advanced Industrial Science and Technology (AIST), Japan
ABSTRACT
Acoustic cavitation is the creation and collapse of bubbles in a liquid irradiated by strong ultrasound. The created bubbles pulsate and radiate acoustic waves, referred to as acoustic cavitation noise. The frequency spectrum of acoustic cavitation noise consists of the driving ultrasonic frequency, its harmonics, subharmonics, ultraharmonics, and broad-band noise. While the broad-band noise has been widely utilized to measure the intensity of acoustic cavitation, its origin is still under debate. In the present study, numerical simulations of acoustic cavitation noise have been performed taking into account the effect of the temporal fluctuation in the number of bubbles. The temporal fluctuation is mainly caused by fragmentation of bubbles. The amplitude of the temporal fluctuation is assumed to be inversely proportional to the lifetime of bubbles, which is estimated by numerical simulations of the shape oscillations of bubbles. It has been shown that the temporal fluctuation in the number of bubbles results in the broad-band noise. Although non-periodic stable pulsation of bubbles also results in broad-band noise, its contribution is negligible, at least under the experimental conditions of Ashokkumar et al., in which the ultrasonic frequency was 515 kHz. It is concluded that transient cavitation results in the broad-band noise while stable cavitation does not, when transient and stable cavitation are defined by the lifetime of bubbles.
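The mechanism invoked in this abstract, a fluctuating bubble number turning a line spectrum into broad-band noise, can be illustrated with a deliberately simplified sketch: a single Rayleigh-Plesset bubble stands in for the simulated population, its far-field pressure is scaled by a randomly fluctuating bubble count N(t), and the spectra with constant and fluctuating N are compared. All parameter values, the 10% fluctuation level and the single-bubble simplification are assumptions for illustration, not the model used in the study.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Water / adiabatic-gas constants and drive settings -- illustrative values only
    RHO, MU, SIGMA, P0, GAMMA = 998.0, 1.0e-3, 0.072, 101.3e3, 1.4
    F, PA, R0 = 515e3, 1.0e5, 3.0e-6      # drive frequency (Hz), amplitude (Pa), rest radius (m)

    def rayleigh_plesset(t, y):
        R, Rdot = y
        p_gas = (P0 + 2*SIGMA/R0) * (R0/R)**(3*GAMMA)
        p_inf = P0 + PA*np.sin(2*np.pi*F*t)
        Rddot = ((p_gas - p_inf - 2*SIGMA/R - 4*MU*Rdot/R)/RHO - 1.5*Rdot**2) / R
        return [Rdot, Rddot]

    t = np.linspace(0.0, 200/F, 40000)    # 200 acoustic cycles, 200 samples per cycle
    sol = solve_ivp(rayleigh_plesset, (t[0], t[-1]), [R0, 0.0], t_eval=t,
                    method="LSODA", rtol=1e-8, atol=1e-12, max_step=1/(100*F))
    R, Rdot = sol.y
    Rddot = np.array([rayleigh_plesset(ti, (Ri, Rdi))[1] for ti, Ri, Rdi in zip(t, R, Rdot)])

    # Far-field pressure of one bubble at r = 1 cm: p = rho*(R^2*Rddot + 2*R*Rdot^2)/r
    p_one = RHO * (R**2 * Rddot + 2.0*R*Rdot**2) / 0.01

    # A constant bubble number gives a line spectrum; a fluctuating number (fragmentation)
    # modulates the signal and fills in the spectrum between the harmonics.
    rng = np.random.default_rng(0)
    N_const = 1000.0
    N_fluct = N_const * (1.0 + 0.1*rng.standard_normal(t.size))

    def spectrum(p):
        return np.abs(np.fft.rfft(p * np.hanning(p.size)))

    freqs = np.fft.rfftfreq(t.size, d=t[1]-t[0])
    idx = np.argmin(np.abs(freqs - 1.37*F))          # a frequency between harmonic lines
    print("between-harmonic level, constant N   :", spectrum(N_const*p_one)[idx])
    print("between-harmonic level, fluctuating N:", spectrum(N_fluct*p_one)[idx])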
Dept. of Mechanical Engineering, University of Hawaii-Manoa, Honolulu, HI, USA
ABSTRACT
Targeted ultrasound contrast agents are encapsulated microbubbles with conjugated ligands on their shell which allow them to bind to specific diseased regions. The agents oscillate nonlinearly about their equilibrium radii, producing nonlinear scattering signatures that facilitate novel detection and imaging techniques. They provide molecular imaging capabilities to diagnostic ultrasound and a potential means of treatment for therapeutic ultrasound. These molecular imaging capabilities show promise in a variety of medical applications including the early diagnosis and treatment of cancer, cardiovascular disease and biofilm infections. The latter topic is highlighted, as it has only recently been investigated with respect to targeted ultrasound agent imaging. Currently, there is no established method for in-vivo imaging of biofilm infections, which occur when bacteria adhere and form a matrix on an exposed tissue surface. The matrix protects the bacteria and is highly resistant to antibiotic treatment in its later stages of development. Diseases such as endocarditis, an infection of the heart valve, can be difficult to treat other than by surgical intervention. Targeted ultrasound contrast agents may provide a significant advance in the early diagnosis of biofilm infections.
For drug delivery applications, the synergistic effect of targeted agents and ultrasound has been shown to enhance permeability, though the mechanisms are not well understood. Studies measuring real-time changes in endothelial cell permeability with an Electrical Cell Impedance System (ECIS) are outlined. The respective roles of binding and ultrasound can be examined in a novel and more rigorous manner with this method. Finally, outstanding theoretical and modeling issues related to the modeling of targeted agents are addressed. The dynamics of bubbles in proximity to a boundary and the role of the boundary's material properties are described. Recent developments in acoustic forcing schemes which can enhance the agents' nonlinear resonance responses are also discussed.
(1) IMEC vzw, Leuven, Belgium (2) Katholieke Universiteit Leuven, Leuven, Belgium
ABSTRACT
Acoustic intensity is a very important parameter in high-frequency ultrasound cleaning. The intensity variations in a cleaning bath are measured with a raster-scanned hydrophone, and the results are simulated with the spatial impulse response method. Furthermore, acoustic reflections from the cleaning bath walls are minimized with damping material. The influence of different types of gases (oxygen, argon, nitrogen and carbon dioxide) on the ultrasound cleaning efficiency is investigated. Gas concentrations of oxygen and carbon dioxide are measured directly. The cleaning results indicate that oxygen, argon and nitrogen give comparable cleaning results, whereas particle removal with dissolved carbon dioxide gas is completely absent.
(1) National University of Singapore, Singapore (2) Ecole Polytechnique, Palaiseau, France (3) Institute of High Performance Computing, Singapore
ABSTRACT
An oscillating bubble will generate a jet towards a solid surface in its collapse phase. This phenomenon is known from, for example, underwater explosions, but it has also been confirmed for much smaller bubble sizes. It has also been shown previously that such a collapsing bubble can pump liquid from one side of a plate with a hole to the other, provided that the bubble and the hole are aligned. The working principle is the same: a high-speed jet is formed in the collapse phase of the bubble, and this jet is directed towards the hole in the plate.
In the current study we investigate whether it is possible to mix fluids with an oscillating bubble. Bubbles are created experimentally with a spark and a laser, and high-speed camera images are taken of the ensuing bubble and fluid dynamics. First, an interface is created between two immiscible fluids; for this purpose a heavy liquid (HFE) and water are used. It appears that the density difference between the two fluids is responsible for the creation of a jet, which mixes the two fluids. The formation of a crown near the HFE-water interface is observed when the bubble is created very close to the surface. This phenomenon appears to be similar to the crown often observed when drops splash on a layer of liquid. Experiments are also shown with two miscible fluids in order to see whether the fluid-fluid interface plays any role; for this purpose, a layer of honey was used with a layer of water. The bubble dynamics and the mixing of the fluids appear to be very similar to the case with immiscible fluids. Finally, some experiments were performed in a microchannel using a laser-generated bubble. The resulting flow phenomena are novel, with possibly new underlying flow physics.
(1) Precision and Intelligence Laboratory, Tokyo Institute of Technology, Japan (2) Faculty of Life and Medical Sciences, Doshisha University, Japan
ABSTRACT
Commercialized contrast agents are encapsulated and have surrounding elastic shells. The vibrational amplitude of such microcapsules is smaller than that of a free microbubble due to shell effects. In this paper, the vibration of a single microcapsule with a hard plastic shell in an acoustic standing wave field was investigated. First, simultaneous optical observation of a microbubble vibration by a high-speed video camera and a laser Doppler vibrometer (LDV) was performed. An acrylic cylindrical observation cell attached to a bolt-clamped Langevin transducer was employed to trap bubbles tens of micrometers in size at the antinode of the acoustic standing wave generated in the cell. The bubble vibration at 27 kHz could be observed, and the experimental results from the two methods showed good agreement. This result implies that the vibration of a microcapsule with a hard plastic shell can also be observed using the LDV.
Microcapsules (microsphere F-80ED, Matsumotoyushi, Osaka, Japan) made of PVC (polyvinyl chloride), with a radius distribution of approximately 10 to 100 μm and an average radius of 50 μm, were employed. The displacement amplitude of the capsule vibration excited with 37 kPa at 115 kHz was measured to be approximately 100 nm. For comparison, the acoustic radiation force acting on the microcapsule in the acoustic standing wave was determined from the trapping position in the standing wave. The vibrational displacement amplitude of the capsule was estimated from the theoretical equation for the acoustic radiation force, and these two results were in good agreement. The vibrational amplitude of the capsule was proportional to the amplitude of the driving sound pressure. A larger expansion ratio was observed as the capsule approached the resonance condition under the same driving sound pressure and frequency.
The University of Melbourne, Victoria, Australia
ABSTRACT
This study uses a theoretical model to investigate the effect of a plane solid boundary on the dynamics of a small group of microbubbles, within which each bubble is assumed to have the same dynamical behaviour. The model is derived from an equation of Keller-Miksis-Parlitz form using an image bubble technique, and is validated by comparison with established experimental results. The resulting system of coupled ordinary differential equations is solved numerically in the case where all microbubbles begin with the same initial conditions, and is investigated for a given set of ultrasound parameters. It was found that the solid boundary causes the oscillations of the microbubbles to increase as the distance between the bubbles and the boundary is decreased. Although this effect is very small for the scenarios considered in this study, it was observed that the solid boundary has more influence on the bubbles' dynamics for microbubbles of larger equilibrium radius and for clusters with a larger number of microbubbles. The bifurcation characteristics of the bubble systems are also investigated, and it was found that a bubble system transitions from order to chaos at lower driving pressure amplitudes when the distance between the microbubbles and the solid boundary is decreased.
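The image-bubble idea used in this abstract can be sketched with a simpler incompressible Rayleigh-Plesset equation in place of the Keller-Miksis-Parlitz form of the study: a rigid wall at distance d is represented by an identical, in-phase image bubble at separation 2d, and the coupling term is absorbed algebraically so the radial acceleration can be isolated. Parameter values are illustrative only.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative liquid/gas constants and drive settings (not taken from the study)
    RHO, MU, SIGMA, P0, GAMMA = 998.0, 1.0e-3, 0.072, 101.3e3, 1.4
    F, PA, R0 = 1.0e6, 1.0e5, 2.0e-6     # 1 MHz drive, 100 kPa, 2 um equilibrium radius
    D = 10.0e-6                          # bubble-to-wall stand-off distance (m)

    def rp_with_image(t, y, L):
        """Rayleigh-Plesset bubble plus an identical, in-phase image bubble at distance L.
        The image adds (R^2*Rddot + 2*R*Rdot^2)/L to the liquid inertia; solving the
        combined equation for Rddot gives the expression returned below."""
        R, Rdot = y
        p_gas = (P0 + 2*SIGMA/R0) * (R0/R)**(3*GAMMA)
        p_inf = P0 + PA*np.sin(2*np.pi*F*t)
        dp = (p_gas - p_inf - 2*SIGMA/R - 4*MU*Rdot/R) / RHO
        Rddot = (dp - 1.5*Rdot**2 - 2.0*R*Rdot**2/L) / (R + R**2/L)
        return [Rdot, Rddot]

    t = np.linspace(0.0, 20/F, 4000)
    near = solve_ivp(rp_with_image, (t[0], t[-1]), [R0, 0.0], t_eval=t, args=(2*D,),
                     method="LSODA", rtol=1e-8, atol=1e-12, max_step=1/(100*F))
    far = solve_ivp(rp_with_image, (t[0], t[-1]), [R0, 0.0], t_eval=t, args=(1.0,),
                    method="LSODA", rtol=1e-8, atol=1e-12, max_step=1/(100*F))  # wall 0.5 m away
    print("max radial excursion near the wall:", near.y[0].max() - R0)
    print("max radial excursion without wall :", far.y[0].max() - R0)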
(1) Center for Industrial and Medical Ultrasound, Applied Physics Lab, Univ. of Washington, Seattle, WA, USA (2) Indiana University, Indianapolis, IN, USA
ABSTRACT
Microbubbles are currently used in both diagnostic and therapeutic ultrasound. The interaction of microbubbles with the surrounding tissue is important for several reasons: not only do the constraining effects of the tissue influence the microbubble dynamics, but the bubble dynamics also affect the surrounding tissue. Understanding the coupled microbubble/vessel interactions is thus important for applications ranging from diagnostic and molecular imaging to therapeutic ultrasound. This talk will discuss our work with microbubbles in actual ex-vivo tissues. Ultra-high-speed microphotography was used to directly observe the interactions of single and multiple microbubbles with microvessels in ex vivo rat mesenteries. The mesenteric vessels were perfused with saline, green India ink (to enhance the contrast of the vessels) and microbubbles. The microbubbles were excited by 2-μs long pulses with a center frequency of 1 MHz and peak negative pressures spanning diagnostic and therapeutic levels (1-7 MPa). The images were captured with 50-ns exposures over the duration of insonation. Image registration was used to identify the specific locations within the vessels that were treated, so that these specific regions could later be examined histologically and under transmission electron microscopy (TEM). The images of microbubbles show a wonderful assortment of behavior, including simple spherical oscillations, bubble-bubble interactions, liquid jetting, translation, and break-up. The tissue response is also very interesting, and includes distension and inward jetting into the lumen. Several important outcomes associated with the interactions have been observed. Tissue jetting into the vessel lumen, which we call invagination, appears to be greater than distension in most cases. Also, liquid jetting through the microbubbles almost always forms away from the nearest tissue boundary, not towards it. Therefore, the strains and stresses associated with localized invagination may be the dominant mechanism for vessel damage, at least early in the rupture process. Although the tissue responds on the same time-scale as the microbubble dynamics (microseconds), it relaxes back to its original state on the order of milliseconds. Histological examination of the tissue is also very interesting. In many cases, we do not observe damage, even at high pressures. When we do observe damage, however, it shows a tearing of the endothelium away from the surrounding muscle, as might occur if the invagination pulls the endothelium into the lumen.
(1) Key Laboratory of Modern Acoustics, Nanjing University, Ministry of Education, Nanjing, P.R.China (2) State Key Lab of Pharmaceutical Biotechnology, School of Life Sciences, Nanjing University, Nanjing, P.R.China (3) Department of Physics, University of Vermont, Burlington, VT, USA
ABSTRACT
It has been shown that the efficiency of gene/drug delivery can be enhanced under ultrasound (US) exposure in the presence of US contrast agent microbubbles, due to acoustic cavitation-induced sonoporation. However, obstacles remain to achieving a controllable sonoporation outcome. The general hypotheses guiding the present studies were that inertial cavitation (IC) activity accumulated during US exposure could be quantified as an IC dose (ICD) based on passive cavitation detection (PCD), and that the assessment of the sonoporation outcome should be correlated with the ICD measurements. Methods: In the current work, MCF-7 cells mixed with PEI:DNA complex were exposed to 1-MHz US pulses with 20-cycle pulse length and varied acoustic peak negative pressure (P-: 0 (sham), 0.3, 0.75, 1.4, 2.2 or 3.0 MPa), total treatment time (0, 5, 10, 20, 40 or 60 s), and pulse-repetition frequency (PRF: 0, 20, 100, 250, 500, or 1000 Hz). Four series of experiments were conducted: (1) the IC activities were detected using a PCD system and quantified as ICD; (2) the DNA transfection efficiency was evaluated with flow cytometry; (3) the cell viability was examined by PI staining and measured using flow cytometry; and (4) scanning electron microscopy was used to investigate the sonoporation effects on the cell membrane. Results: (1) the ICD generated during US exposure was affected by the US parameters (e.g., P-, total treatment time, and PRF); (2) the pooled data analyses demonstrated that the DNA transfection efficiency initially increased linearly with increasing ICD, then tended to saturate rather than reaching a maximum as the ICD kept rising; and (3) the measured ICD, sonoporation pore size, and cell viability were highly correlated with each other. All the results indicate that IC activity plays an important role in US-mediated DNA transfection through sonoporation, and that the ICD could be used as an effective tool to monitor and control the effect of US-mediated gene/drug delivery.
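One common way to realise the ICD bookkeeping described above, assumed here purely for illustration, is to notch out the drive frequency and its harmonics from each passive-cavitation-detection trace and accumulate the root-mean-square of the remaining broadband signal over pulses; the sampling rate, notch settings and synthetic traces below are placeholders, not the settings of the study.

    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    FS = 50e6      # PCD sampling rate (Hz) -- placeholder
    F0 = 1e6       # driving frequency (Hz), matching the 1-MHz exposures described above

    def inertial_cavitation_dose(pcd_traces, n_harmonics=10):
        """Sum of per-pulse broadband RMS values: harmonics of the drive are notched out
        and the RMS of the remaining (broadband) emission is accumulated over pulses."""
        icd = 0.0
        for trace in pcd_traces:
            filtered = np.asarray(trace, dtype=float)
            for k in range(1, n_harmonics + 1):
                b, a = iirnotch(w0=k*F0, Q=30.0, fs=FS)
                filtered = filtfilt(b, a, filtered)
            icd += np.sqrt(np.mean(filtered**2))
        return icd

    # Synthetic example: 20-cycle tone bursts plus white "broadband" noise of growing level
    rng = np.random.default_rng(1)
    t = np.arange(int(20*FS/F0)) / FS
    pulses = [np.sin(2*np.pi*F0*t) + level*rng.standard_normal(t.size)
              for level in (0.01, 0.05, 0.2)]
    print("ICD accumulated over 3 pulses:", inertial_cavitation_dose(pulses))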
School of Mathematics, University of Birmingham, UK
ABSTRACT
Micro-cavitation bubble dynamics caused by ultrasound has wide and important applications in medical ultrasonics and sonochemistry. An approximate theory is developed for nonlinear, non-spherical bubbles in a compressible liquid by using the method of matched asymptotic expansions. The perturbation is performed to second order in terms of a small parameter, the bubble-wall Mach number. The inner flow near the bubble can be approximated as incompressible at first and second order, leading to the use of Laplace's equation, whereas the outer flow far away from the bubble can be described by the linear wave equation, also at first and second order, and is treated as a direct problem. Matching between the two expansions provides the model for non-spherical bubble behaviour in a compressible fluid. A numerical model using the mixed Eulerian-Lagrangian method and a modified boundary integral method is developed to obtain the evolving bubble shapes. The primary advantage of this method is its computational efficiency over solving the wave equation throughout the fluid domain. The numerical model is validated against the general Keller-Herring equation (GKHE) for spherical bubbles in weakly compressible liquids; excellent agreement is obtained for the bubble radius evolution up to the fourth oscillation. Numerical analyses are further performed for non-spherical oscillating acoustic bubbles. When subjected to a weak acoustic wave, bubble jets often develop at the two poles of the bubble surface after several cycles of oscillation. Resonance occurs when the wave frequency is equal to the natural oscillation frequency of the bubble. When subjected to a strong acoustic wave, a vigorous bubble jet develops along the direction of wave propagation in only a few cycles of the acoustic wave.
(1) Physics of Fluids, University of Twente, The Netherlands (2) Mesoscale Chemical Systems, MESA+ Research Institute, University of Twente, The Netherlands (3) IMEC vzw, Heverlee, Belgium (4) Department of Mechanical Engineering, The Johns Hopkins University, USA
ABSTRACT
The generation of free (micro)bubbles is an essential yet usually uncontrolled step in numerous applications of acoustic cavitation, such as ultrasonic cleaning and sonochemistry. Here, we address this issue using pre-defined cavitation nuclei driven continuously in the medium kHz regime (100-500 kHz) at pressures up to 500 kPa. The nuclei consist of stabilized gas pockets in cylindrical micropits (5-50 μm radius) etched in silicon substrates. It is found that an acoustic pressure threshold exists near 100 kPa, above which the behavior of the liquid-gas meniscus switches from the well-known stable drum-like vibration to bubble expansion outside the pit combined with strong deformations and shape oscillations, the latter eventually resulting in microbubble pinch-off. Just above the pressure threshold, a large number of small microbubbles (O(1 μm)) are continuously ejected and immediately recaptured by the source bubble. At elevated acoustic pressures the pinch-off mechanism becomes more pronounced, resulting in the generation of larger bubbles (O(5 μm)) which, due to pressure gradients, are frequently pushed away from the pit. Surprisingly, the resulting loss of gas generally does not lead to the deactivation of the pit. Through gas diffusion the gas volume can regain its initial state, thereby enabling another nucleation and microbubble pinch-off event. In this manner controlled and localized acoustic microbubble generation is achieved.
Defence Science & Technology Organisation and University of Sydney Institute of Marine Science, NSW, Australia
ABSTRACT
Gas bubbles in water are among the most efficient sources of sound in nature when they oscillate as monopoles, and as such provide the mechanism of sound generation for a diverse range of underwater physical and biological processes. These include most of the natural and some of the introduced sources of sound in the ocean and other water masses. Bubbles also have a high scattering cross section, substantially affect the propagation of underwater sound, and enhance the detectability of objects containing bubbles. The underwater sound measured from natural bubbles can be inverted to estimate wind speed and rainfall integrated over an area of the water surface, and can also provide insight into the transfer of gases across the air-sea interface. This paper will discuss why bubbles are so effective in these respects, how they are responsible for most of the ambient noise in the ocean through many different sources, and how they are important in extracting information about processes within the ocean and in the vicinity of the ocean surface.
(1) Scripps Institution of Oceanography, UCSD, CA, USA (2) Graduate School of Oceanography, URI, Rhode Island, USA
ABSTRACT
The formation of bubbles is often accompanied by a pulse of sound, driven by the non-equilibrium conditions accompanying bubble creation. The underwater sound of rain and breaking waves, for example, is largely associated with bubble entrainment. Recent studies have shown that for bubbles released slowly from a nozzle, the collapse of the neck of air connecting the bubble to the nozzle immediately prior to detachment can account for the observed pulse of sound. The gross features of the neck collapse can be explained with a simple hydrodynamic model based on the conversion of surface tension energy in the neck into kinetic energy of the fluid surrounding the neck. The driving force behind the neck collapse is the large Laplace pressure drop across the air-water boundary associated with surface tension and the small radius of curvature at the end of the neck. This model has also been applied to bubbles fragmenting in fluid turbulence and found to be largely consistent with experiment. The role of surface tension in exciting breathing mode oscillations of coalescing bubbles and some types of bubbles entrained by falling drops will be discussed. In these cases, the neck of air is replaced by an azimuthally symmetric skirt which, like the neck, is subject to a rapid acceleration due to Laplace pressure forces.
J-PARC Center, Japan Atomic Energy Agency, Naka-Gun, Ibaraki, Japan
ABSTRACT
Multibubble effects on cavitation inception are studied in detail to show that bubble-bubble interaction can change the inception process of cavitation in a variety of ways. In an effort to develop a high-power pulsed neutron source which uses a giant proton accelerator and liquid mercury, we have attempted to use microbubbles to reduce cavitation damage of the mercury vessels caused by proton-induced intense pressure waves. From an off-line experiment, we found that cavitation of liquid mercury is suppressed by injecting a sufficient amount of gas microbubbles into mercury. This observation was the starting point of our multibubble study on cavitation inception. Using a simple multibubble model in which Rayleigh-Plesset type equations are coupled through the bubble-emitted pressure waves, we first showed that microbubbles can in certain cases suppress explosive growth of cavitation bubbles under negative pressure, implying a significant effect of injected microbubbles on cavitation inception. We then performed a more detailed numerical study on cavitation in multibubble cases and found that several different patterns of bubble dynamics, such as competitive growth and interrupted expansion, are possible in the early stage of cavitation. From this detailed study we also found that the instantaneous unstable equilibrium radii of growing bubbles play an essential role in these processes. These findings unveil the complex nature of interacting bubbles under negative pressure.
School of Electrical and Information Engineering, University of South Australia, Mawson Lakes, SA, Australia
ABSTRACT
Acoustic streaming is a fundamental nonlinear phenomenon typically accompanying high-frequency vibration in fluids. Known and investigated as early as Rayleigh and his contemporaries, it is attracting renewed interest due to the availability of powerful computational tools, advanced photography and precise laser velocimetry instrumentation, which can produce accurate experimental results. Its physical mechanism, however, is still not clearly understood, with the analysis somewhat limited by the traditional premises of harmonic analysis, radiation force, and wave propagation and reflection, focused on the nonlinear terms of the inertial-frame formulations.
Following our earlier analysis of nonlinear effects on rigid particles in a streaming fluid using time-domain (TD) finite element (FE) analysis with a moving mesh in COMSOL, we now present the modelling of ultrasonic streaming alone. We use state-of-the-art laser velocimetry instrumentation to measure the average velocity of 0.5 μm latex tracer particles in water streaming at 0.003-0.03 m s-1 and insonified in the 1 MHz frequency range. We use a LabVIEW virtual instrument to analyse the light scattered by a swarm of particles in the moving fringe pattern of crossed laser beams and find the ensemble particle motion from the frequency spectrum of the signal.
In order to verify the FE modelling results with respect to the streaming velocity, the electric power is monitored at the transducer terminals. Our FE simulation, based on the Navier-Stokes equation for viscous incompressible fluids, does not involve wave propagation and radiation but is capable of presenting the transient development of streaming, the effects of boundaries, and the character of the ultrasonic source. Our investigation shows that streaming is neither implied by a time-varying topology nor associated with asymmetry, or even with movement of the source or the fluid surface. Surprisingly, the streaming velocity is increased by making the enclosure fully symmetrical or by using a 'bodiless' distributed pressure source. Observations reveal not only evolving vortices but a more complex character of streaming than generally shown, including unsteady and reversed-velocity streaming. Animated plots reveal the transient development of a volume of streaming medium progressively 'propagating' away from the source and following a self-contained volume of a pressure 'valley' travelling ahead of it and apparently attracting its motion.
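The velocity estimate described in the preceding abstract rests on the standard laser-Doppler relation between the burst frequency and the fringe spacing of two crossed beams. The short sketch below applies that relation with an assumed laser wavelength and beam half-angle, chosen only so that the resulting velocities fall in the 0.003-0.03 m s-1 range quoted above; neither value is taken from the study.

    import numpy as np

    def fringe_spacing(wavelength, half_angle_rad):
        """Spacing of the interference fringes formed by two crossed laser beams."""
        return wavelength / (2.0*np.sin(half_angle_rad))

    def velocity_from_doppler(doppler_freq_hz, wavelength, half_angle_rad):
        """Velocity component across the fringes of a tracer particle whose scattered
        light is modulated at the given Doppler (burst) frequency."""
        return doppler_freq_hz * fringe_spacing(wavelength, half_angle_rad)

    wl, half = 532e-9, np.deg2rad(2.8)     # assumed laser wavelength and beam half-angle
    print(f"fringe spacing: {fringe_spacing(wl, half)*1e6:.2f} um")
    for f_d in (550.0, 5500.0):            # example Doppler burst frequencies (Hz)
        print(f"f_D = {f_d:6.0f} Hz -> v = {velocity_from_doppler(f_d, wl, half)*1e3:.1f} mm/s")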
(1) University of Melbourne, Australia (2) CSIRO, Australia (3) University of Sevilla, Spain (4) Institut de Mécanique des Fluides de Toulouse, Université de Toulouse, France
ABSTRACT
Numerical and experimental results are presented on the natural emission of sound by bubbles. High-amplitude sound is generally emitted during events such as bubble pinch-off from an orifice, fragmentation, or coalescence: cases in which there are brief, extreme distortions of the gas-liquid interface. While well-established theory relates bubble sizes to their natural emission frequencies, practical measurements are complicated by uncertainty in the sound amplitude emitted by individual bubbles, which is determined by the fluid dynamics of the event. An understanding of these processes is relevant to acoustic feedback and control of industrial systems such as smelters, minerals processors, bioreactors and wastewater plants. It is also relevant to measurements of oceanic wave breaking. Experiments are presented on two processes leading to bubble sound emission: the pinch-off of a bubble, and the coalescence of bubbles. These events inherently involve extreme perturbations of the interface. Numerical calculations with a two-dimensional axisymmetric compressible multiphase code are presented for both processes, yielding good predictions of the interfacial dynamics but, in general, less realistic predictions of the acoustic emissions.
Three-dimensional calculations by a parallel compressible multi-material flow code are also presented, in which the bubble was subjected to a small perturbation. The fundamental oscillation frequency of a bubble near a boundary showed a shift in natural frequency owing to the presence of the image of the bubble on the other side of the boundary. This simulation of the acoustic emission frequency compares very well with analytic theory and experiment, suggesting that the limitations of a finite computational domain are not significant at least for small perturbations to the interface. Thus, a three-dimensional, if computationally intensive, approach may in the future yield a comprehensive prediction of the amplitude of bubble sound emission.
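The "well-established theory" relating bubble size to natural emission frequency referred to in this abstract is the Minnaert resonance of a monopole bubble; a short worked example follows, assuming an air bubble in water at atmospheric pressure and neglecting surface tension.

    import numpy as np

    RHO, P0, GAMMA = 998.0, 101.3e3, 1.4   # water density, ambient pressure, adiabatic index of air

    def minnaert_frequency(radius_m):
        """Natural (monopole) frequency of an air bubble of the given radius in water,
        neglecting surface tension -- a good approximation above ~0.1 mm radius."""
        return np.sqrt(3.0*GAMMA*P0/RHO) / (2.0*np.pi*radius_m)

    for r_mm in (0.5, 1.0, 2.0, 5.0):
        print(f"R0 = {r_mm:3.1f} mm  ->  f0 = {minnaert_frequency(r_mm*1e-3)/1e3:6.2f} kHz")

A 1 mm bubble, for example, comes out near 3.3 kHz, which is why entrained bubbles of millimetre size dominate the audible sound of rain and breaking waves.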
Department of Mechanical Engineering, University of Hawaii at Manoa, Honolulu, USA
ABSTRACT
Atherosclerosis, the cause of myocardial infarction, stroke, acute coronary syndromes and ischemic gangrene, is a multifaceted disease. Atherosclerotic lesions, or atheromata, consist of asymmetric focal thickenings of the intima. Rupture-prone lesions may remain undetected and, upon manifestation, expose pro-thrombotic material from the core of the plaque to the blood, thus transforming the stable plaque into a vulnerable, unstable one that is likely to rupture, induce a thrombus, and elicit an acute coronary syndrome. Moreover, this type of lesion has been implicated in more than half of all acute myocardial infarction incidents. The current conventional imaging methods for the detection of atherosclerotic lesions are intravascular ultrasound (IVUS), magnetic resonance imaging (MRI) and computed tomography (CT). Although these techniques have proven useful in clinical practice, significant limitations exist. IVUS can characterize atheromata only in the vicinity of the ultrasound catheter, MRI's long image-acquisition time hinders the consistent imaging of structures, and CT lacks the ability to visualize rupture-prone, non-stenotic, lipid-rich lesions.
The ability to recognize specific biological markers that appear when rupture-prone atherosclerotic plaque develops in normal artery walls could aid detection and thus facilitate an earlier diagnosis. This study investigates the in vitro detection of vulnerable plaque with targeted ultrasound contrast agents (UCAs). Scanning acoustic microscopy (SAM) at center frequencies of 50 MHz and 100 MHz was used for the quantification of the mechanical properties of excised artery tissue sections. Targeted UCAs were conjugated to specific antibodies and allowed to bind to sites of interest. Prior to the acoustic and epifluorescence investigation, artery sections with thicknesses of 50 μm and 60 μm were obtained using a Leica CM3050S cryostat (Leica, Bannockburn, IL, USA). Following the acquisition of the RF data with SAM for the quantification of mechanical properties, backscatter coefficients and attenuation, the samples were prepared for histological staining and epifluorescence microscopy. The alignment of the optical and acoustic lenses allowed the determination of regions of interest (ROIs) which exhibited bound UCAs and ROIs without the presence of the agents. The concurrent epifluorescence and acoustic investigation of overlapping ROIs allows the direct comparison of the mechanical properties of normal versus atherosclerotic artery sections. The efficiency of UCA binding and the corresponding backscatter and attenuation at these sites were examined. This preliminary study provides new insights into the potential for the detection of vulnerable plaque with intravascular ultrasound (IVUS) and targeted UCAs.
Inserm U930 CNRS ERL 3106, Université François Rabelais, Tours, France
ABSTRACT
Contrast agents, consisting of tiny gas microbubbles, are currently approved for ultrasound imaging in cardiology and in radiology. The microbubbles have a mean size of about 3 microns and are encapsulated by a thin biocompatible layer. Multiple clinical studies have established the utility of ultrasound contrast agents (UCA) in improving the accuracy of echography for the diagnosis of many diseases and in reducing health care costs by eliminating the need for additional testing.
Future clinical applications of UCA extend beyond imaging and diagnostics, offering ultrasound technology a new therapeutic dimension. In recent years, novel therapeutic strategies have been explored using microbubbles and ultrasound. Our current data demonstrate that, in the presence of microbubbles, ultrasound waves transiently destabilize the cell membrane, allowing the incorporation of drugs, including genes, into the cells. Moreover, the microbubbles might be used as a drug vehicle to achieve spatially and temporally controlled local release. In addition, microbubbles are able to identify diseased targets through specific targeting.
(1) Medical Physics and Centre for Cardiovascular Sciences, University of Edinburgh, UK (2) University of Patras, Greece (3) Medical Physics, NHS Lothian, Edinburgh, UK
ABSTRACT
Ultrasound microbubble contrast agents can be used to image blood flow in vessels. Small blood vessels form around arterial plaques and in tumours by angiogenesis, and microbubbles can be imaged in these vessels to aid diagnosis. Currently no specific ultrasound imaging techniques exist which can distinguish between microbubbles in large or small vessels; however, in vitro it has been shown that microbubbles in small tubes have a different acoustic response from those in larger tubes. Imaging techniques optimised for microbubbles in large or small vessels, or for slow-flowing or attached microbubbles, would help microbubbles be used to their full potential. A system for the investigation of single microbubbles was modified to include tubes. Definity and biSphere microbubbles were studied in 200 μm cellulose and 50 μm acrylic tubes. Data for free bubbles subject to the same acoustic field were also available for comparison. For all microbubbles a 6-cycle pulse with a transmit frequency of 1.6 MHz was used at acoustic pressures of 160-1000 kPa. A Philips Sonos 5500 research machine with an s3 phased-array transducer, from which the unprocessed backscattered data could be obtained, was used, and the fundamental and harmonic backscattered pressures were calculated.
For the rigid-shelled biSphere, at 550 kPa in the 200 μm tube, the mean fundamental RMS pressure was 2.1 ± 1.3 Pa compared to 4.7 ± 3.7 Pa for free microbubbles. For the softer-shelled Definity there was an increase in harmonic backscatter with decreasing tube size: the mean harmonic RMS pressure was 3.14 ± 1.2 Pa for free microbubbles and 7.54 ± 3.2 Pa in the 50 μm tube. In addition, for Definity in the 50 μm tube no microbubbles at resonance were observed, compared to 18% for free bubbles. For biSphere, the number of consecutive pulses for which an echo was detected was lower in tubes than for free biSphere; e.g., at 550 kPa, 7% of free microbubbles were not detected after 7 consecutive insonations, compared to 33% in the 200 μm tube and 39% in the 50 μm tube. For Definity, fewer microbubbles were destroyed in the 200 μm tube: at 550 kPa, 81% of microbubbles were destroyed by pulse 7, compared to 98% and 97% in the 50 μm tube and free, respectively. The results demonstrate that the origin of strong harmonic signatures from microbubbles in vivo is partly due to the presence of vessel walls, and that it is possible to detect small changes in microbubble behaviour. This provides valuable information on the acoustic response of microbubbles in tubes and for the development of signal processing algorithms.
(1) Centre de Biophysique Moléculaire UPR 4301 CNRS, Orléans, France (2) INSERM U930 - CNRS ERL 3106, Tours, France (3) Institute for Medical Engineering, Ruhr-Universität Bochum, Bochum, Germany
ABSTRACT
Ultrasound (US) stimulation coupled with gaseous microbubbles (MB) is a new technique for efficient non-viral gene delivery. Upon US exposure, MBs can be expanded, moved and even destroyed. These properties offer the opportunity for site-specific local drug or gene delivery. We have focused our in vivo investigation on optimizing US-assisted MB gene transfer in tendons. The rationale is to combine the possibility of transferring therapeutic genes with the beneficial US stimulation for tendinopathy. Using a luciferase reporter gene, we found that mouse tendons injected with 10 μg of plasmid in the presence of 5x105 BR14 MB, insonated at 1 MHz, 200 kPa and 40% duty cycle for 10 minutes of stimulation, were efficiently transfected. The rate of gene expression was 100-fold higher than that obtained with naked plasmid injected alone, and sustained expression up to 100 days was obtained. The level of gene expression was correlated with the plasmid copy number detected by qPCR. Such prolonged transgene expression might be due either to plasmid DNA (pDNA) chromosomal integration or to episomal expression. We have been able to transform bacteria with DNA extracted from the tendons having the highest expression level. These data prompted us to evaluate this technique to restore gene expression in fibromodulin KO mice. We found that only the combination of US and MB allowed significant fibromodulin expression. This method offers the possibility of transferring genes that positively affect tendon healing together with the beneficial stimulation effect of ultrasound, reported as a physical therapy for tendinopathy. Ongoing experiments concern the use of the optimal setup to transfect the PDGF gene to stimulate tendon healing. The regeneration of injured Achilles tendons will be assessed by histology, biomechanical tests, and measurement of collagen production by qPCR.
A key to the success of this technique lies in understanding the mechanisms governing MB-cell interactions. Several studies suggest that pore formation is less likely to be the dominant mechanism. Recently, it was suggested that endocytosis mechanisms might also be involved. We have started to perform real-time confocal microscopy investigations on adherent live cells. Our preliminary data show that most of the fluorescent pDNA was localized at the plasma membrane immediately following insonation. At later times, vesicle-like punctate staining progressively moved towards the nucleus. Experiments performed on cells show that pDNA is located in late endosomes 3 h post-insonation. Data obtained from these different studies will allow the limitations of US-assisted MB delivery to be specified.
(1) ENI Val de Loire, Unité Mixte de Recherche, Imagerie et Cerveau, Université François Rabelais, Blois, France (2) Departamento de Fisica Aplicada, Instituto de Investigación para la Gestión Integrada de las Zonas Costeras (IGIC), Universidad Politecnica de Valencia, Spain (3) Unité Mixte de Recherche, Imagerie et Cerveau, Université François Rabelais, Tours, France
ABSTRACT
Microbubble surface modes appear when contrast agents are insonified by high-power ultrasound. In this case, the radius is in general a space-dependent function which can be expanded on the basis of spherical harmonics describing the spatial vibrational modes of the bubble. Modulational instability (MI) is an instability of the vibration modes in nonlinear systems induced by external driving, and can be physically interpreted as a nonlinear four-wave (or four-mode) mixing process. Starting from the continuous Marmottant model for the vibration of the bubble radius R(t), and a discrete nonlinear model of the discretized radius R(t, θ_n) with periodic boundary conditions and nonlinear coupling between surface modes, MI in bubbles is investigated numerically. In a first step, symmetry analysis applied to the nonlinear equations associated with the models yields invariant properties that allow the identification of similarity parameters linking the continuous and discrete models. The second step consists in studying numerically the MI criterion versus the amplitude of the external field, representing the insonifying pressure that excites the bubble. The analysis reveals the existence of intrinsic localized modes (ILMs), similar to those found in other, more generic systems of coupled oscillators. The observation of ILMs in 1D chains of coupled oscillators, which present many similarities with the discrete bubble model, will also be presented, with the objective of studying the existence of spatio-temporal localized excitations leading to the breaking of the bubble. The perspective of this analysis is to define practical optimized excitations of the bubble for drug delivery applications.
(1) Physics of Fluids, University of Twente, The Netherlands (2) Biomedical Engineering, Erasmus MC, The Netherlands (3) Philips Research, The Netherlands
ABSTRACT
Ultrasound contrast agents for organ perfusion imaging have recently been adopted as probes for molecular imaging, as well as active targeted drug carriers for intravascular therapy. Given their poor optical contrast, fluorescence imaging is required to image the bubbles in stratified vascular flows in intravital microscopy and to visualize drug release, flow and the resulting transfection through cell membranes. The timescales at which bubble rupture takes place (microseconds and shorter) make time-resolved fluorescence imaging of ultrasound-triggered drug release extremely challenging. Here we present details of the bubble rupture dynamics. The observations were made using a combination of a high-power CW laser and the Brandaris 128 ultra-high-speed imaging facility, giving unique insight into the physical mechanisms of local intravascular drug delivery. Recordings of fluorescently labeled phospholipid-coated contrast agents show an excellent delineation of the bubble wall at frame rates of up to 5 million frames per second. This allowed us to reveal the time-resolved dynamic distribution of the shell material, including lipid shedding. The improved contrast of fluorescence over bright-field imaging is highly relevant for the combined optical and acoustical characterization of coated microbubbles, as their dynamics is governed by the nonlinear behavior of the shell material. Oil-filled polymeric microcapsules (Philips Research) with a high dye concentration mixed into the hexadecane liquid core demonstrate a pronounced photoacoustic effect when excited with a laser intensity of 1 MW/cm2 or higher. The dye molecules absorb the laser light, leading to intense heating of the liquid core. A rapid phase change leads to an impulsive thermal expansion. The resulting vapor bubble dynamics, imaged using fluorescence imaging on a timescale of the order of 100 nanoseconds, revealed a typical oscillation frequency of 200 kHz. The resulting acoustic signature in the far field can amount to 100 Pa at a distance of 2.5 cm from the capsule.
Department of Anatomy, Fukuoka University School of Medicine, Japan
ABSTRACT
Photocatalyzed titanium dioxide (TiO2) nanoparticles with UV light have been shown to eradicate cancer cells. However, the required in situ introduction of UV light limits the use of such a therapy in patients. In this study, we evaluated the antitumor effect of TiO2 combined with low-energy ultrasound (US) on a melanoma cell line (C32) in vitro and in vivo. In vitro, C32 cells were sonicated with or without the presence of TiO2 at an intensity of 0.5 or 1 W/cm2 for 10 s (frequency: 1 MHz; burst rate: 5 Hz; duty cycle: 50%). Immediately after sonication, cell viabilities were analyzed by the trypan blue exclusion method. Cell viabilities were as follows: control, 95.8 ± 0.4%; TiO2 alone, 95.3 ± 1.3%; US alone, 92.4 ± 2.1%; and TiO2 combined with US, 53.6 ± 2.5% (1 W/cm2, 10 s). There was a significant enhancement of cell killing by TiO2 combined with US. In in vivo experiments using xenografts (nude mice), intratumoral injection of TiO2 with US exposure led to a greater degree of tumor regression than did intratumoral injection of TiO2 or US alone. These results suggest that the combination of ultrasound and TiO2 is an effective method of killing cancer cells. Further experiments are needed to evaluate the exact mechanism of this phenomenon. This technique may be used as a very safe, non-invasive cancer therapy.
(1) Faculty of Life and Medical Sciences, Doshisha University, Japan (2) Precision and Intelligence Laboratory, Tokyo Institute of Technology, Japan
ABSTRACT
Simultaneous optical observations of microbubble vibration are performed using a high-speed video camera and an LDV. To observe microbubble vibration, a high-speed camera is ordinarily used because it can capture the whole time-varying behavior of the microbubble. However, the frame rate of the camera is generally low compared with the frequency of the irradiated ultrasound. Additionally, due to the low spatial resolution of the pictures taken by the high-speed camera, it is difficult to measure the precise behavior of the microbubble. To address these problems, a laser Doppler vibrometer was introduced alongside the ordinary high-speed camera observation system, and the two sets of observations were compared with each other.
An acrylic cylindrical observation cell attached to a bolt-clamped Langevin transducer was employed to trap bubbles tens of micrometers in size at the antinode of the acoustic standing wave generated in the cell. The LDV was located above the cell and its focal point was adjusted onto a microbubble, in the downward direction, to measure the vibrational displacement amplitude. The focal length of the LDV was 20 mm. As a result, spherical bubble vibration at 27 kHz with a vibrational displacement amplitude of 2 μm could be observed, and the experimental results from the two methods showed good agreement. A nonspherical vibration was also observed. The radius-versus-time curves were quite similar to each other, but the vibration amplitude measured by the LDV was about half that measured by the high-speed camera. For complicated behaviors such as nonspherical vibrations, we found that the bubble vibration can be measured precisely by adjusting the focal point of the LDV to the center of the bubble. Using the presented observation system, it is confirmed that more precise observation of bubble vibration behavior can be realized.
(1) Emmy-Noether Group, Institute of Medical Engineering, Department of Electrical Engineering and Information Technology, Ruhr-Universität Bochum, Bochum, Germany (2) Department of Engineering, The University of Hull, Kingston upon Hull, UK
ABSTRACT
The ultrasound-induced formation of bubble clusters may be of interest as a therapeutic means. If the clusters behave as one entity, i.e., one mega-bubble, its ultrasonic manipulation towards a boundary is straightforward and quick. If the clusters can be forced to accumulate to a microfoam, entire vessels might be blocked on purpose using an ultrasound contrast agent and a sound source. In this study, we analyse how ultrasound contrast agents with different shell compositions form clusters in a capillary and what happens to the clusters if sonication is continued, using continuous driving frequencies in the range 1-10 MHz. Furthermore, we show high-speed camera footage of microbubble clustering.
We observed the following stages of microfoam formation within a population of microbubbles that was dense before ultrasound arrival. After the sonication started, contrast microbubbles collided, forming small clusters, owing to secondary radiation forces. These clusters coalesced within the space of a quarter of the ultrasonic wavelength, owing to primary radiation forces. The resulting microfoams translated in the direction of the ultrasound field, hitting the capillary wall, also owing to primary radiation forces. We have demonstrated that as soon as the bubble clusters are formed, and as long as they are in the sound field, they behave as one entity. At our acoustic settings, it takes seconds to force the bubble clusters to positions approximately a quarter wavelength apart. It likewise takes only seconds to drive the clusters towards the capillary wall. Subjecting an ultrasound contrast agent of given concentration to a continuous low-amplitude signal makes it cluster into a microfoam of known position and known size, allowing for sonic manipulation, including the release of its contents.
UMR INSERM U930, CNRS ERL 3106 and Université François Rabelais, Tours, France
ABSTRACT
Upon suitable excitation, microbubbles generate different nonlinear components such as 2nd harmonic, superharmonic or subharmonic components. Currently, due to the limited frequency bandwidth of PZT transducers, only a single nonlinear component is selected and imaged. Today, advantages of capacitive micromachined ultrasonic transducers (CMUTs), such as their wide frequency bandwidth, could be exploited in nonlinear contrast imaging. However, CMUTs are inherently nonlinear, thus generating undesirable harmonic components. Thanks to compensation methods, it is possible to considerably reduce these unwanted components by modifying the excitation waveform, and so to exploit CMUTs for nonlinear contrast imaging. In this study we propose to exploit the wide CMUT bandwidth to enhance the response from microbubbles by selective imaging of the 2nd harmonic and subharmonic components concomitantly.
Experiments were performed using a 128-element CMUT linear array probe (Vermon, France) centered at 4 MHz. The CMUT bandwidth at -6 dB was greater than 120%. The probe was connected to an open scanner with analog transmitters (M2M, France). A 2-cycle Gaussian excitation pulse of 2 MHz, 700 kPa and 60% bandwidth was transmitted. First, optimal parameters (phase and amplitude) for compensation of the nonlinearities of the probe were estimated using hydrophone measurements in order to reduce the 2nd harmonic component generated by the CMUT probe. Then, contrast agent harmonic imaging was performed using a 1/2000 diluted solution of SonoVue in a flow phantom. Selective imaging over a wide frequency band including both the 2nd harmonic (4 MHz) and the subharmonic (1 MHz) components was performed. Results obtained with the CMUT probe were compared to those of a standard PZT probe.
Thanks to the compensation method, the 2nd harmonic component is reduced by 16 dB, and the contrast-to-tissue ratio (CTR) increases by 11 dB when the compensation is applied. Taking advantage of the wide band of the probe, the compensation procedure is then applied to image both the 2nd harmonic and the subharmonic concomitantly. Compared to subharmonic imaging alone, the addition of the 2nd harmonic component provides an increased SNR. These results demonstrate the ability to increase both CTR and SNR when wide-band imaging is performed and reveal the potential of CMUT probes for contrast agent imaging.
Medical Physics and Centre of Cardiovascular Sciences, University of Edinburgh, Edinburgh, UK
ABSTRACT
Ultrasound microbubble (MB)-enhanced imaging is currently applied in the clinic for heart and liver diagnosis. Its potential for quantifying microvascular flow has been researched for over 20 years. More recent research has explored novel applications of MB technologies such as molecular imaging, enhancement of cell porosity, thrombolysis and drug/gene delivery. The slow progress of the field may be attributed to the lack of high-quality experimental data that enable a thorough understanding of MB behaviour under realistic ultrasound fields and in realistic in vivo experimental settings, which in turn would facilitate translation of research into novel in vivo tools. The necessity for investigating the acoustics of single MBs stems from the lack of a single, or even a predictable distribution of, acoustic responses. In other words, investigations of MB clouds provide limited information on the individual scatterer components, which hampers both the comparison of experimental and theoretical data and the assessment of the performance of signal processing algorithms. Single MB acoustic measurements have provided high-quality data that may advance MB theory and signal processing research.
With the help of accurate calibration of MB scatter it is possible to observe and study physical phenomena such as resonance, the onset of transient cavitation, MB cracking, the different contributions of the shell, gas and environment including narrow tubing, and the various decay mechanisms. In addition, pulse sequences available in commercial scanners can be assessed accurately. It is possible to capture large sample sizes of signal distributions and enable thorough signal processing analysis without presupposing a model of MB behaviour. It has been found that current pulse sequences, such as amplitude modulation, operate suboptimally, as they exploit only a small proportion of MBs, even though they successfully cancel linear tissue echoes in low-mechanical-index imaging. In conclusion, single MB acoustic measurements offer high-quality data for the development of signal processing, improve the theoretical understanding of MB behaviour under well-controlled experimental conditions, and can efficiently characterise different MB types, which is useful for the development of MB technologies.
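To make the amplitude-modulation idea concrete, the following toy sketch (illustrative only; the scatterer model and numbers are assumptions, not the measured data discussed above) shows why the AM combination cancels linear echoes while leaving a nonlinear MB residual:

import numpy as np

# Toy amplitude-modulation (AM) sequence: transmit at full and half amplitude,
# then combine the echoes as e_full - 2*e_half. A linear scatterer cancels
# exactly; a scatterer with a quadratic nonlinearity leaves a residual.
fs, f0 = 100e6, 2e6
t = np.arange(0, 4/f0, 1/fs)
pulse = np.sin(2*np.pi*f0*t)*np.hanning(t.size)

def echo(tx, alpha):
    """Toy scatterer: linear response plus quadratic distortion of strength alpha."""
    return tx + alpha*tx**2

for name, alpha in [("tissue (linear)", 0.0), ("microbubble (nonlinear)", 0.3)]:
    e_full = echo(pulse, alpha)
    e_half = echo(0.5*pulse, alpha)
    residual = e_full - 2*e_half
    print(name, "residual energy:", float(np.sum(residual**2)))

The residual vanishes for the linear scatterer and scales with the nonlinear term for the bubble, which is the contrast mechanism exploited at low mechanical index.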
Department of Anatomy, Fukuoka University School of Medicine, Fukuoka University, Japan
ABSTRACT
In recent studies we demonstrated that low-intensity ultrasound (US) combined with echo contrast microbubbles can be used for delivery of genes into tumor cells. In this study, we investigate the gene transfer efficiency of ultrasound with or without echo-contrast agent (BR14) under different acoustic conditions (e.g. intensity, duty cycle and exposure time) to determine if there are relationships between the gene transfer efficiency and the ultrasound parameters. In vitro, 2 μg of the plasmid DNA (EGFP) and 2-5 x 10^6 BR14 microbubbles (Bracco, Milan, Italy) were mixed with 4 x 10^5 Chinese hamster ovary (CHO) cells at a final volume of 200 μl (n=4). The cells were sonicated using a 0.8 cm diameter flat transducer at a frequency of 0.7 MHz (burst rate: 1.0 Hz; intensity: 0.6, 2.0 and 3.7 W/cm2; duty cycles: 20%, 50% and 70%; exposure time: 20 s and 40 s). After the sonoporation, the percentage of PI-negative cells was determined as a measure of cell viability, and GFP-positive cells among 10000 live cells after 24 h of culture were detected by flow cytometry.
Sonication resulted in a significant loss of viability. Sonication at higher intensities resulted in increased loss of viability (up to about 30%), which was significantly enhanced by the addition of BR14 (to about 60%). The percentage of GFP-positive cells was about 2-3% at 24 h after sonication. The intensities and duty cycles used in this study did not significantly change the transfection rates. The percentage of GFP-positive cells was enhanced about 2-fold by BR14. At 3.7 W/cm2, despite the significant loss of viability compared with 2.0 W/cm2 at a duty cycle of 70%, there was no significant difference in transfection rates. However, increasing the sonication time to 40 s at 3.7 W/cm2 resulted in a significant increase in transfection rate, which rose further (up to 2-fold) with increasing duty cycle. The results showed that changing intensity alone does not significantly change the transfection rate. However, the addition of BR14 significantly enhanced the transfection rate. In addition, at 3.7 W/cm2 transfection can be further enhanced by increasing the sonication time and the duty cycle.
Institute of Acoustics, Key Laboratory of Modern Acoustics, Ministry of Education, Nanjing University, Nanjing, P.R.China
ABSTRACT
Ultrasound contrast agents have shown promise for ultrasonic molecular imaging, in which targeted agents selectively attach to molecular markers expressed on diseased endothelium and increase contrast in areas such as thrombus and inflammation. Ultrasound radiation force can manipulate encapsulated microbubbles and displace them off the vessel axis in the blood stream towards the vessel wall, thus increasing the targeting efficiency in ultrasonic molecular imaging. However, the secondary radiation force produces a reversible attraction and aggregation of microbubbles, limiting the improvement in imaging sensitivity. This study proposes a theoretical model of the secondary radiation force for encapsulated microbubbles. In this model, the nonlinear radial oscillations of the microbubbles are described by a modified Herring equation including the change of the surface tension during oscillation, and are coupled with the translational motions of the microbubbles. The model is then used in a numerical investigation of the translational motion of encapsulated microbubbles in ultrasound molecular imaging. Results indicate that the secondary radiation force has a significant effect on the aggregation of microbubbles, and that its effect is associated with the ultrasound frequency, amplitude, and microbubble concentration. The results obtained are of interest for developing a highly sensitive technique for distinguishing adherent microbubbles from free microbubbles.
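As a hedged illustration of the ingredients of such a model (generic textbook forms, not necessarily the authors' exact equations), a Herring-type radial equation with a radius-dependent surface tension σ(R), together with the commonly quoted secondary Bjerknes interaction between two pulsating bubbles, reads
\[
R\ddot{R} + \tfrac{3}{2}\dot{R}^2 = \frac{1}{\rho}\left[ p_{g0}\!\left(\frac{R_0}{R}\right)^{3\kappa}\!\left(1 - \frac{3\kappa\dot{R}}{c}\right) - p_0 - p_{ac}(t) - \frac{4\mu\dot{R}}{R} - \frac{2\sigma(R)}{R} \right],
\qquad
|F_{12}| \approx \frac{\rho\,\langle \dot{V}_1\dot{V}_2 \rangle}{4\pi d^2},
\]
where \(V_1, V_2\) are the bubble volumes and \(d\) their separation; the interaction is attractive when the bubbles pulsate in phase, which is what drives the aggregation discussed above.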
(1) Center for Industrial and Medical Ultrasound, Applied Physics Laboratory, University of Washington, Seattle, WA, USA (2) Department of Acoustics, Faculty of Physics, Moscow State University, Moscow, Russia
ABSTRACT
In High Intensity Focused Ultrasound (HIFU), thermal bioeffects are often accompanied by mechanical damage of tissue caused by bubbles. There has been significant interest in recent HIFU studies in controlling bubble activity to generate purely mechanical erosion of tissue without thermal coagulation. In our previous studies we have reported that this bioeffect can be achieved using millisecond-long pulses that initiate explosive boiling in tissue within each pulse. In this work, specific protocols of HIFU exposure that result in purely thermal, combined thermal and mechanical, or purely mechanical tissue ablation using millisecond pulses of shocks are presented. Experiments were performed in excised bovine heart tissue using a 2 MHz single-element transducer. The in situ exposure parameters were varied: pulse duration (200 microseconds - 500 milliseconds), duty cycle (0.5% - 100%), total number of pulses (1 - 50), peak positive pressure (30 - 60 MPa), and peak negative pressure (8 - 12 MPa). Lesions in tissue were photographed and samples were collected for histological analysis. Purely mechanical tissue damage occurred when pulse lengths exceeded the time to boil by 20-50% and the duty cycle did not exceed 1%. Cavities created in tissue were filled with red liquid and had a reproducible tadpole shape and size (15 mm by 5 mm). At higher duty cycles or longer pulses the liquid turned whiter in color and thickened because of thermal coagulation; starting from 20% duty cycle and/or pulses longer than 5 times the time to boil, the lesions consisted of solid coagulated tissue with a vaporized core. Histological analysis demonstrated a homogenized mixture of lysed cells and matrix in mechanically eroded lesions and no structural disruption of cells in purely thermal lesions. We conclude that millisecond boiling is a reliable and controllable method to non-invasively erode tissue.
(1) State Key Laboratory of Medical Ultrasound Engineering, Institute of Ultrasound Engineering in Medicine, Department of Biomedical Engineering, Chongqing Medical University, P.R.China (2) The Second Affiliated Hospital of Chongqing Medical University, P.R.China (3) Department of Physics, University of Vermont, Burlington VT, USA
ABSTRACT
Basic research and clinical applications of HIFU to treat many life-threatening diseases such as uterine fibroids, osteosarcoma, liver cancer, breast cancer, and many others have attracted broad interest in global medical communities. HIFU has been considered one of the most important developing technologies of modern medicine. Since 1988, researchers and medical doctors at Chongqing Medical University, China have been performing fundamental research, animal experiments, and clinical trials on HIFU technology. This presentation will serve as a brief review of this development over the past 21 years. Due to the multi-disciplinary nature of this technology, the talk will focus on the inter-linkage of three aspects: engineering, biology and clinical protocol. Examples of clinical treatments of liver, bone and breast cancers will be introduced; our treatment record shows that patients have survived longer than ten years after treatment. In addition, we will also report on applications of HIFU in the treatment of gynecological non-neoplastic diseases such as non-neoplastic epithelial disorders of the vulva, chronic cervicitis and HPV infections.
(1) Department of Acoustics, Physics Faculty, Moscow State University, Moscow, Russia (2) Center for Industrial and Medical Ultrasound, Applied Physics Laboratory, University of Washington, Seattle, USA
ABSTRACT
In addition to heating and cavitation effects, an intense ultrasound beam gives rise to an acoustic radiation force acting on a scatterer or on the propagation medium itself. This can be used to move the scatterer in a desired direction. Such a need exists during treatment of kidney stone disease, because residual kidney stone fragments often remain after extracorporeal shockwave lithotripsy, ureteroscopic laser lithotripsy, and percutaneous nephrolithotomy, and new stones may grow from those fragments. The goal of this work is to study theoretically the pushing effect created by an acoustic wave incident on a kidney stone. We simulated the radiation force imparted on a kidney stone by a high-intensity focused ultrasound beam. First, the acoustic wave interaction with the stone and the corresponding scattering were modeled using finite differences based on the elasticity equations. Then the radiation stress tensor was calculated and the radiation force was obtained for a given stone position and size. Finally, the net force acting on the stone was calculated by integrating the radiation stress over a closed surface surrounding the stone. The focused ultrasound beam parameters were taken in accordance with an existing therapy system used in our laboratory: an annular array of 2 MHz with focal lengths in the range 4.5-8.5 cm, up to 4-MPa bursts for 100 ms to 1 s. Numerical calculations showed that moderate intensity focused ultrasound could create a radiation force that exceeds the stone weight.
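One commonly used second-order form of this closed-surface evaluation (a hedged sketch in standard notation, not necessarily the authors' exact formulation) is
\[
F_i = -\oint_S \Big[ \langle p_2 \rangle\,\delta_{ij} + \rho_0 \langle u_i u_j \rangle \Big] n_j \,\mathrm{d}S,
\qquad
\langle p_2 \rangle = \frac{\langle p^2 \rangle}{2\rho_0 c_0^2} - \frac{\rho_0 \langle |\mathbf{u}|^2 \rangle}{2},
\]
where p and u are the first-order acoustic pressure and velocity obtained from the scattering solution on a surface S enclosing the stone, n is the outward normal, and the angle brackets denote a time average over the acoustic period.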
(1) Department of Mechanical Engineering, Heriot-Watt University, Edinburgh, UK (2) Department of Design, Manufacture and Engineering Management, University of Strathclyde, Glasgow, G1 1XJ, UK
ABSTRACT
The manufacture of polymeric solid foams with an engineered distribution of mechanical properties has been made possible by irradiating a viscoelastic reacting mixture with ultrasound. Structures with a heterogeneous pore size distribution offer great advantages over homogeneous distributions in many applications that require strength with a minimal amount of material (e.g. airplane wings). However, manufacturing solutions lag well behind the demand for these components. Sonication has recently been demonstrated as a potential technique that can support the fabrication of these materials. The mechanism involves bubble growth in a polymeric melt undergoing foaming that is influenced by the ultrasonic environment (i.e. sound pressure, frequency and exposure time). Once the foam solidifies, the final porosity distribution within the solid reflects the sonication conditions. In order to obtain sophisticated distributions of porosity and porosity gradients, fine control of the acoustic pressure field has to be achieved. This paper presents an attempt to correlate acoustic pressure with porosity gradation by comparing the simulated acoustic field with the engineered porosity analysed in experimental polyurethane foams. COMSOL Multiphysics™ has been used to recreate the process in the irradiation chamber, and the acoustic fields, both in the environment and in the reaction vessel, have been simulated and validated. Results from this study will allow the optimisation of the manufacturing process of functionally tailored materials with the sonication method.
Physics Department and Materials Science Program, University of Vermont, Burlington, VT, USA.
ABSTRACT
A liposome with a diameter ranging from 150 to 200 nm has been considered one of the optimal vehicles for targeted drug delivery in vivo, since it is able to encapsulate drugs and circulate stably in the blood stream. Its small size, however, makes controlled release of its encapsulated content difficult. A feasibility study on the application of high intensity focused ultrasound (HIFU) at megahertz frequencies to induce controlled release of this content was carried out. Using dynamic light scattering and transmission electron microscopy, this study demonstrated that 21.2% of the encapsulated fluorescent material (FITC) could be released from liposomes with an average diameter of 210 nm when exposed to continuous (cw) ultrasound at 1.1 MHz (ISPTA = 900 W/cm2) for 10 s, and that the release efficiency can reach 70% after 60 s of irradiation. The results also reveal that rupture of relatively large liposomes (>100 nm) and the generation of pore-like defects in the membrane of small liposomes (<100 nm) due to HIFU excitation might be the main causes of the release; inertial cavitation took place during the irradiation. Controlled drug release from liposomes by HIFU may prove to be a potentially useful modality for clinical applications.
Monash University, Victoria, Australia
ABSTRACT
The theory of non-linear wave interactions leading to so-called interfacial wave turbulence, where a broadband distribution of capillary wave phenomena may be induced by a monofrequency oscillator, is well known, but experimental results are rare. In particular, it is challenging to set up a physical system where both capillary wave amplitudes are easy to measure and capillary forces dominate gravitational forces. Though capillary forces dominate at small scales, the small oscillation amplitudes and generally high oscillation frequencies preclude measurement via cameras or other traditional means. Instead, we use a laser Doppler vibrometer, capable of measuring oscillations up to 40 MHz, and providing a minimum detectable deflection of picometres. Using ultrasonic surface acoustic wave excitation at 19.5 MHz, we generate wave turbulence on the free surface of a water drop. Energy at the driving frequency does not directly enter the cascade, which does not persist into the MHz range; rather the driving frequency excites a low-frequency resonance. This resonance appears to, in turn, excite higher harmonics, forming the cascade of length scales seen in the frequency spectrum of wave heights. The initial low-frequency resonance, contrary to expectations via Faraday wave theory, is not at one-half the excitation frequency. Instead, we find the low-frequency resonance to be on the order of 100 Hz, which probably arises due to a balance of capillary and inertial forces; the Faraday wave is not observed due to the high frequency of the excitation. By condensing each spectrum to the value of its power exponent, we find that the turbulence decays as the electrical input power increases beyond 500 mW, a SAW amplitude of about 1 nm. At these powers the probability of very large waves deviates strongly from the Gaussian distribution, indicative of strong non-linearity. Wave turbulent theory is therefore invalid in this high-power regime as the highly non-linear nature of the waves violates the theory's fundamental assumption of weak non-linearity.
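For reference, the weak-turbulence prediction against which such cascades are usually benchmarked is a power-law spectrum of the surface elevation; for pure capillary wave turbulence the commonly quoted form is
\[
S_\eta(f) \propto f^{-17/6},
\]
so condensing each measured spectrum to its power exponent amounts to fitting this slope and tracking its departure from the weak-turbulence value as the input power is raised.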
Institut d'Electronique, de Microélectronique et de Nanotechnologies (IEMN), Université Lille 1 and UMR CNRS 8520, Avenue Poincaré, 59652 Villeneuve d'Ascq cedex, France
ABSTRACT
Acoustic waves generated at the surface of a solid substrate can induce deformation, motion and even atomization of partially wetting droplets. The characteristic time scales associated with the droplet response differ strongly from the acoustic period, suggesting the existence of a nonlinear coupling between the acoustic waves and the droplet dynamics. Although different behaviors have been observed under different experimental conditions (droplet size, acoustic wave frequency, wetting properties of the liquid), the underlying physics remains unclear. To understand it, a parametric experimental study [P. Brunet et al., Phys. Rev. E, 81, 036315 (2010)] has been performed at a fixed frequency of 20 MHz, varying the droplet size, the liquid viscosity and the acoustic wave intensity. In these experiments, the free surface of the droplet is modified in three different ways: first a breaking of its symmetry, second global oscillations of the droplet and finally small-amplitude, higher-frequency "trembling modes". To explain these deformations, two classical nonlinear acoustic driving mechanisms can be invoked: the radiation pressure and the acoustic streaming. The relative importance of these nonlinear phenomena strongly depends on the frequency considered. At 20 MHz, the acoustic wave is multiply reflected within the droplet and therefore the acoustic radiation pressure plays an important role. At higher frequencies, the acoustic wave hardly reaches the surface and the radiation pressure plays no role. With our experiments, we show that while both acoustic streaming and radiation pressure can induce the asymmetry of the droplet, global oscillations only appear when acoustic radiation is significant. We therefore exhibit, for the first time, the role played by the acoustic radiation pressure in droplet dynamics in a certain frequency range. The comprehension of these phenomena is of fundamental importance for minimizing the energy required to handle droplets, in view of the harmless manipulation of biofluids.
MicroNanophysics Research Laboratory, Monash University and Melbourne Centre for Nanofabrication, Victoria, Australia
ABSTRACT
The transmission of acoustic waves through materials and across interfacial discontinuities is a centuries‐old area of research. A rather curious application of ultrasonic acoustic radiation—actuation of fluids and particles within them—has renewed interest in this area and exposed phenomena that are not explained by previous theories once viewed as canon. During the talk, applications of these phenomena will be proffered, including fingernail‐sized microdevices to atomize sessile droplets for drug encapsulation, pulmonary drug delivery and nanoparticle formulation; devices for droplet jetting and manipulation; a device for fluid pumping and particle segregation in closed microfluidic structures; and a device to enable micro and nanoparticle concentration and separation in a sessile droplet in a matter of seconds. Along the way, the underlying physical phenomena will be explored and explained, and the potential future of this area will bring the presentation to a close.
(1) Nanjing University of Aeronautics & Astronautics, P.R.China (2) Nanyang Technological University, Singapore
ABSTRACT
Ultrasonic trapping of small particles has great potential applications in micromachine assembly, particle separation, particle transportation, etc. In the conventional method, standing-wave ultrasound is used to trap small particles in water or air at the nodal or anti-nodal points of the sound pressure. We have proposed a method that uses an ultrasonic radiation surface to trap small particles. In this report, the operating principle, structures and characteristics of ultrasonic actuators that employ an ultrasonic radiation surface to trap small particles are analyzed and presented.
(1) Graduate School of Science and Technology, Shizuoka University, Hamamatsu-shi, Japan (2) Graduate School of Engineering, Shizuoka University, Hamamatsu-shi, Japan
ABSTRACT
A droplet manipulator is realized using surface acoustic wave (SAW) devices. The manipulation mechanism relies on the radiation of a longitudinal wave into the liquid from the SAW. If a sensor is fabricated on the SAW device, a novel integrated system of sensor and actuator is realized. This novel system is named a "micro-laboratory". An interdigitated electrode (IDE) sensor was integrated on the SAW device and the droplet impedance was measured. Also, using the IDE sensor, the immobilization of bovine serum albumin was observed. The validity of the proposed micro-laboratory was confirmed through experiments. However, the micro-laboratory has a drawback: the droplet is manipulated directly on the piezoelectric surface. When biomolecules are mixed in a droplet and a bio-reaction, such as an immunoreaction, is measured, it is difficult to remove them from the surface. Residual biomolecules contaminate the next measurement. However, because a piezoelectric substrate is expensive, realizing a disposable micro-laboratory on a piezoelectric substrate is difficult. Our solution to this problem is a three-layer micro-laboratory, which consists of a sensor plate / matching liquid / piezoelectric substrate. An interdigital transducer (IDT) for generating the SAW is fabricated on the piezoelectric substrate. At the interface between the matching liquid and the piezoelectric substrate, a longitudinal wave is radiated. When the radiated wave reflects at the boundary between the matching liquid and the sensor plate, a bulk acoustic wave (BAW) is generated in the sensor plate. A droplet on the sensor plate is manipulated by this elastic wave. We have fabricated a three-layer structure of cover glass / distilled water / 128YX-LiNbO3. We have also succeeded in manipulating a droplet on the cover glass and in measuring the droplet impedance using the IDE sensor on the cover glass. The manipulation mechanism is the longitudinal wave radiated from the BAW. For the optimization of the three-layer micro-laboratory, the propagation characteristics of the BAW in the glass plate are important. First, the radiated longitudinal wave is visualized in a water tank. Using particle image velocimetry, the flow rate is measured. For comparison, the longitudinal wave radiated from the 128YX-LiNbO3 is observed. The results show that the flow rate of the three-layer micro-laboratory is less than one-third of that of the 128YX-LiNbO3. This means that a higher applied power is required for manipulation on the three-layer micro-laboratory. Moreover, the streaming in the droplet is compared and the same phenomena are observed on the three-layer micro-laboratory and the 128YX-LiNbO3.
Precision and Intelligence Laboratory, Tokyo Institute of Technology, Japan
ABSTRACT
Noncontact ultrasonic transportation of small particles around a circular trajectory was discussed. A 0.5-mm-thick aluminum disc with a diameter of 30 mm was employed as a vibrating plate and a 0.5-mm-thick PZT ring with inner and outer diameters of 8 and 14 mm respectively, was attached to the vibrating plate. On the basis of finite element analysis (FEA) calculations, the electrodes of the piezoelectric ring were divided into 24 pieces to generate a flexural vibration mode with one nodal circle and four nodal lines at the resonance frequency of 47.8 kHz. A circular plate having the same dimensions as the vibrating plate was installed parallel to the vibrator. It was used as a reflector to generate an acoustic standing wave in the air between the two plates. The acoustic field between the vibrating plate and reflector was calculated by FEA and the distribution of the acoustic radiation force acting on a small rigid particle was calculated to predict the position of the trapped particle. By switching the driving condition of the PZT ring in the circumferential direction, the acoustic field between the vibrator and reflector can be rotated and the trapped particle can be moved in the circular trajectory. Using a prototype of the vibrating plate, polystyrene particles with diameters of several millimeters could be trapped at regular intervals along the horizontal nodal line of the standing wave. The sound pressure distribution between the vibrating plate and reflector was measured by a fiber optic probe, and the experimental and calculated results showed good agreement. By switching the driving conditions of the divided electrodes in the circumferential direction, the nodal lines of the vibrating plate could be rotated and the trapped particle could be manipulated with a circular trajectory in air.
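As a hedged, back-of-the-envelope counterpart to the FEA-based force calculation described above (a small-particle, long-wavelength Gor'kov approximation with purely illustrative parameter values, valid only for particles much smaller than the wavelength and not a substitute for the full plate-reflector model):

import numpy as np

# Gor'kov radiation force on a small sphere in a 1D standing wave in air,
# p(x,t) = p_a*cos(k*x)*cos(w*t). All values below are illustrative guesses.
rho0, c0 = 1.2, 343.0            # air density (kg/m^3) and sound speed (m/s)
rho_p, c_p = 30.0, 2350.0        # expanded-polystyrene-like particle (assumed)
f = 47.8e3                       # drive frequency from the abstract (Hz)
p_a = 1.0e3                      # assumed pressure amplitude (Pa)
a = 1.0e-3                       # particle radius (m), must satisfy a << wavelength

k = 2*np.pi*f/c0
f1 = 1.0 - rho0*c0**2/(rho_p*c_p**2)        # monopole (compressibility) factor
f2 = 2.0*(rho_p - rho0)/(2*rho_p + rho0)    # dipole (density) factor
phi = f1/3.0 + f2/2.0                       # acoustic contrast factor
E_ac = p_a**2/(4*rho0*c0**2)                # mean acoustic energy density

x = np.linspace(0.0, c0/f, 500)             # one wavelength
F = 4*np.pi*phi*k*a**3*E_ac*np.sin(2*k*x)   # axial radiation force (N)
# For phi > 0 the force points toward the pressure nodes, which are the
# stable trapping positions; gravity shifts the particle slightly below them.

With these assumed numbers the peak force is of the order of 10 μN, comfortably above the weight of a millimetre-scale foam particle, which is qualitatively consistent with the trapping reported above.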
(1) National Institute of Advanced Industrial Science and Technology (AIST), Nagoya, Japan (2) The University of Electro-Communications, Tokyo, Japan
ABSTRACT
A noncontact micromanipulation technique is needed in micromachine technology, biotechnology and so on. The radiation pressure of ultrasound may be used for this purpose. In the present paper, a standing wave field is generated in a microchannel, making it possible to trap small objects at the nodes of the sound pressure distribution in the medium. A microchannel of 1 mm x 50 mm x 1 mm was made at the center of a glass plate of 50 mm x 50 mm x 5 mm. A PZT transducer is attached to the end of the glass plate. The sound wave is transmitted into the microchannel through the glass plate. In the experiment, when water containing alumina particles was injected into the microchannel, the particles flowed along several layers. This showed that the traveling wave was transmitted into the microchannel and that a standing wave field was formed in the microchannel.
The particles agglomerated in a geometric pattern in a half-circular region made in a microchannel. The particles move to the right when a half-circular region at the right-hand side of a microchannel is irradiated by ultrasound from the left. This phenomenon will be utilized for the concentration of solid particles. Moreover, when the frequency of the ultrasound was swept in the flat microchannel, the particles were spatially shifted. It was possible to control the direction of the particle flow by changing the ultrasound frequency in the branched microchannels. The sound field was numerically calculated by FEM under the experimental conditions and the experimental results were discussed.
Department of Physics, B.N.P.G. College, Rath, Hamirpur, Bharat, India
ABSTRACT
Rare-earth alloys with IV-VI compound semiconductors have been extensively studied in recent years because of their scientific and technological interest. One possible application for alloys of rare-earth elements with lead salts resides in the emerging field of spintronics. Lead selenide is an important semiconductor which finds application in several devices, including IR radiation and photoconductive detectors as well as photovoltaic materials. Solid solutions based on lead chalcogenides have been used as efficient materials for long-wavelength lasers and also in the construction of infrared detectors for the 8-14 μm atmospheric window.
Knowledge of the non-linear properties plays an important part in providing valuable information about mechanical and dynamical properties, such as interatomic potentials, the equation of state, and phonon spectra. Elastic properties are also thermodynamically related to the specific heat, thermal expansion, Debye temperature, melting point and Grüneisen parameter. The elastic constants are believed to be related to the strength of materials. Indeed, the latter has often been related to the bulk modulus, shear modulus, Young's modulus, and Poisson's ratio, which are frequently measured for polycrystalline materials when investigating their hardness. The aim of this work is to give a detailed description of the behavior of non-linear properties such as the second-, third- and fourth-order elastic constants of (PbSe)X - (PbTe)1-X at elevated temperatures from 50 K up to 1200 K by using Coulomb and Born-Mayer potentials. Other non-linear properties, such as the first-order pressure derivatives of the second- and third-order elastic constants, the second-order pressure derivatives of the second-order elastic constants and the partial contractions, are computed from the higher-order elastic constants within this theory.
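For context, the interionic potential underlying such calculations is typically a long-range Coulomb term plus a short-range Born-Mayer repulsion; a minimal generic form (the strength A and hardness parameter b are material-dependent and not specified here) is
\[
\phi(r) = \pm\frac{e^2}{4\pi\varepsilon_0 r} + A\,e^{-r/b},
\]
and the second-, third- and fourth-order elastic constants then follow from the corresponding second, third and fourth strain derivatives of the lattice energy.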
(1) Olympus Co. Ltd., Japan (2) The University of Electro-Communications, Tokyo, Japan
ABSTRACT
Recently, much attention has been paid to handling technologies in microfluidics. For a liquid of micro-scale volume, the interface interaction between the fluid and the walls of a container becomes dominant in mixing. Additionally, the Reynolds number is so small that it is not easy to mix different fluids by turbulent flow in a short amount of time. As a promising technique for realizing effective mixing in such microfluidic systems, acoustic streaming generated by surface acoustic waves (SAWs) has received attention. It has been experimentally demonstrated that streaming driven by a SAW can achieve dramatic mixing in a very short time by radiating ultrasound beams obliquely into a liquid. Interestingly, SAW devices using plural interdigital transducers (IDTs) have improved mixing performance due to the spatial and temporal generation of chaotic flow. In this report, a new method is proposed that uses a SAW device with a 128° Y-cut X-propagation LiNbO3 substrate driven by a frequency-modulation (FM) technique to improve mixing efficiency. Since acoustic streaming depends strongly on the ultrasound pressure field, the spatial profiles of the beam emitted from the SAW device are measured in water using a miniature hydrophone with a small sensitive area. Flow fields that play an important role in liquid mixing are visualized by particle image velocimetry. Furthermore, the effectiveness of FM driving is evaluated by measuring the mixing time for different kinds of liquids whose volumes are several microliters. The mixing time is investigated with the sweep frequency, sweep time and modulation waveform varied as the parameters of the FM signal. It has been revealed that such signals for driving SAW devices can change the spatial profiles of the ultrasound pressure amplitudes and the flow patterns as well. The resulting changes in flow profiles demonstrate that an optimized sweep frequency can effectively improve mixing performance. Additionally, the more suitable the sweep time or the modulation waveform, the higher the mixing effect becomes. The FM driving technique is expected to be a promising method for realizing chaotic mixing in microfluidic technology.
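A minimal sketch of the kind of FM drive signal referred to above (the sample rate, centre frequency, sweep span, sweep rate and modulation waveform are all illustrative assumptions, not the study's values):

import numpy as np

# Frequency-modulated drive for a SAW transducer: the instantaneous frequency
# sweeps about the carrier following a triangular modulation waveform.
fs = 500e6                        # sample rate (Hz), assumed
fc = 20e6                         # SAW centre (carrier) frequency (Hz), assumed
df = 1e6                          # sweep span about the carrier (Hz), assumed
fm = 1e3                          # sweep repetition rate (Hz), assumed
t = np.arange(0.0, 1e-3, 1/fs)    # one full sweep period

m = 2*np.abs(2*((t*fm) % 1.0) - 1.0) - 1.0   # triangular wave in [-1, 1]
phase = 2*np.pi*np.cumsum(fc + df*m)/fs      # integrate the instantaneous frequency
drive = np.sin(phase)
# Swapping m for a sine or sawtooth changes the modulation waveform, the
# third FM parameter investigated above.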
(1) Micro/nanophysics Research Laboratory, Mechanical and Aerospace Engineering, Monash University, Australia (2) Chemical Engineering, Monash University, Australia
ABSTRACT
Surface acoustic wave (SAW) atomization has proved to be an efficient technique for delivering drug particles to the lung by inhalation. Inhalation therapy, also known as pulmonary drug delivery, is far more effective than other routes of drug administration, such as oral delivery and injection. Delivering naked drug particles is straightforward and convenient; however, the shear force, though minor in surface acoustic wave atomization, can still damage a certain amount of drug molecules, especially fragile molecules such as DNA. Furthermore, drugs and vaccines delivered in vivo are required to be biocompatible and biodegradable. Therefore, the encapsulation of bacteria, viruses, DNA, peptides, proteins and other therapeutic molecules within a biodegradable spherical shell of polymeric excipient is a vital vehicle for the controlled and targeted ophthalmic, oral, intravenous or implanted delivery of vaccines and drugs. The advantage of using layer-by-layer (LbL) polymer capsules is the ability to prepare monodispersed capsules with control over the capsule wall thickness, permeability, stability, and degradation characteristics. The polymer selected for each layer is designed to be biocompatible and biodegradable, fulfilling the purposes listed above for a specific part of the body while allowing the successive release of drugs.
Traditional techniques used for particle formation and encapsulation are solvent extraction, phase separation and spray drying, which, however, involve harsh conditions that carry a high risk of drug damage. In comparison, a PDMS microfluidic device offers the ability to continuously produce droplets monodispersed in size and shape. However, the particle size (usually 50-100 μm) is too large for inhalation purposes and the amount of particles produced in a given time period is limited.
Therefore, we propose to synthesize LbL nanoparticles using fast SAW atomization by atomizing one polymer into another, with a simple air-drying process in between. The particle size is controlled by the aerosol size D (1-10 μm) and the polymer concentration C. After air-drying, the particle size "d" will shrink to submicron or even tens of nanometers, determined by d ≈ C*D. We use chitosan and carboxymethyl cellulose (CMC) as model polymers. A series of tests, such as FTIR spectroscopy, zeta-potential measurement, and fluorescence microscopy, has shown successful bonding between chitosan and CMC. The size of the polymeric capsules is shown to be 198.2 nm, which is small enough to be carried by an aerosol for lung deposition.
Micro/Nanophysics Research Laboratory, Monash University, Melbourne, Australia
ABSTRACT
Miniaturised separators play an important role in the development and success of microfluidic systems and lab-on-a-chip devices. High frequency (MHz range) ultrasound was exploited to drive spatial separation and concentration of two differently sized micro-particles within a droplet. Surface acoustic wave (SAW) devices were used to produce nanoscale wave propagation along the free surface of the piezoelectric substrate. Placement of a droplet on the substrate resulted in the excitation of a longitudinal acoustic wave within the fluid medium. This acoustic wave gave rise to two observed phenomena: acoustic radiation pressure and acoustic streaming. At high frequencies the acoustic radiation pressure, which acts over the surface of the particle generating a net acoustic radiation force, can become sufficiently large to be comparable to the drag forces acting on the particle due to acoustic streaming of the fluid. The other key factor determining the relative strength of the two phenomena is the size of the particle. An order of magnitude analysis, in which particle size and frequency were varied, revealed the extent to which each phenomenon contributes to the behaviour of the particle. For a 20 MHz SAW device, at small particle sizes (below 15 μm in diameter), the drag force due to acoustic streaming was found to be dominant. However, at larger particle sizes (above 15 μm in diameter), the acoustic radiation force became an equal contributor. Subsequent experiments confirmed this analysis with complete separation of 6 and 31 μm particles, whereby the 31 μm particles were thrown to the periphery of the droplet in the direction of the acoustic radiation. The 6 μm particles, on the other hand, remained within the bulk and were dragged along the streamlines due to the acoustic streaming. Thereafter, concentration of the 6 and 31 μm particles was achieved in the bulk and on the free surface of the droplet, respectively. This separation technique shows widespread promise for the development of future microfluidic systems, with complete spatial separation and concentration demonstrated.
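The following is a hedged numerical rendering of the type of order-of-magnitude comparison described above; the streaming velocity, energy density and O(1) prefactors are assumed placeholders rather than the study's values, with the radiation force estimated simply as a Rayleigh scattering cross-section times the acoustic energy density:

import numpy as np

# Stokes drag from acoustic streaming (~ a) versus a travelling-wave
# radiation-force estimate (~ a^6 at fixed frequency) for a sphere in water.
rho, c, mu = 1000.0, 1500.0, 1.0e-3     # water properties
f = 20e6                                 # SAW frequency (Hz)
k = 2*np.pi*f/c
U = 1.0e-3                               # assumed streaming velocity (m/s)
E = 10.0                                 # assumed acoustic energy density (J/m^3)

a = np.logspace(-6.0, -4.0, 400)         # particle radius, 1-100 um
F_drag = 6*np.pi*mu*a*U                  # Stokes drag on an entrained particle
F_rad = np.pi*a**2*(k*a)**4*E            # scattering cross-section * energy density

crossover = a[np.argmin(np.abs(F_rad - F_drag))]
print("crossover radius ~ %.1f um" % (crossover*1e6))

With these assumed numbers the two forces cross at a radius of several micrometres, i.e. a diameter of order 10 μm, broadly consistent with the ~15 μm crossover reported above.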
(1) Tokuyama College of Technology, Shunan, Japan (2) Kanagawa University, Yokohama, Japan
ABSTRACT
We devised a screw-shaped ultrasonic motor that incorporates three separate transducers. Three bolt-clamped Langevin-type longitudinal vibration transducers (BLTs) were installed in the shape of a screw in order to produce rotor motion and boost rotor power. Ultrasonic motors have unique characteristics such as high torque at low speed, high holding torque, and silent motion; these characteristics make them superior to conventional electric motors. Therefore, ultrasonic motors are expected to find widespread applications. However, their applications are limited to certain mechanisms, e.g., autofocus mechanisms of cameras. One of the reasons why ultrasonic motors have not been widely utilized is that higher-torque ultrasonic motors have not yet been realized. Ultrasonic motors will find increased applications if their torque is improved. Such motors could then be used in high-torque applications such as robotic arms. It is difficult for practical traveling-wave-type ultrasonic motors to generate higher torque because they are fragile. In order to realize a higher-torque ultrasonic motor, a transducer with high strength and large amplitude is required. A BLT satisfies this requirement, and it is commonly used as a source of vibration in high-power ultrasonic applications. We had previously devised a one-piece-type screw-shaped ultrasonic motor that incorporates BLTs and a BLT connector. This configuration had a flaw in that the driving frequency of the motor was not synchronous with the resonant frequency of the BLTs. Therefore, the motor did not generate sufficient power.
In this study, we devised a screw-shaped ultrasonic motor that incorporates three separate transducers. Each transducer consists of a BLT (diameter: 15 mm) and a stepped horn. The motor has free ends that are parallel to the emitting parts of the BLTs. Therefore, it is easy to match the resonant frequencies of the motor and the BLTs. Vibration distributions were measured. The resonant frequency of the motor matched that of the BLTs. In addition, the load characteristics of the motor were measured. The maximum torque, revolution speed, and efficiency of the ultrasonic motor were 0.67 Nm, 582 rpm, and 7.63%, respectively. The corresponding values of the previously devised one-piece-type motor were 0.41 Nm, 104 rpm, and 0.55%, respectively. This indicates an improvement in the characteristics of the screw-shaped ultrasonic motor. The transient response of the developed motor was also measured. The motor speed rose to its steady-state value within 1.5 ms and fell within 0.5 ms.
(1) Physics of Fluids Group, University of Twente, Enschede, The Netherlands (2) School of Dentistry, University of Birmingham, Birmingham, United Kingdom (3) Department of Cariology, Endodontology and Pedodontology, Academic Center for Dentistry, Amsterdam, The Netherlands
ABSTRACT
A crucial step during a root canal treatment is the irrigation, where an antimicrobial fluid is injected into the root canal to eradicate all bacteria from the root canal system. Agitation of the fluid using an ultrasonically vibrating miniature file has shown a significant improvement in cleaning efficacy over conventional syringe irrigation. However, the exact cleaning mechanisms, being acoustic streaming, cavitation of the fluid or a combined chemical effect, are not fully understood. Here we investigate ultrasonically activated irrigation through experiments and numerical simulations in order to understand the relative importance of the three cleaning mechanisms. We combine high-speed imaging and micro-particle image velocimetry to visualize the flow pattern and the onset of cavitation in a root canal model (sub-millimeter dimensions), at timescales relevant to the cleaning processes, which are of the order of microseconds. High-speed microPIV measurements of the acoustic streaming around an ultrasonically oscillating file at frequencies of 20, 30 and 40 kHz are coupled to the oscillation characteristics of the file as simulated numerically and measured with a laser vibrometer. Comparison between the streaming pattern inside the root canal and in the free field shows the importance of the confinement of the root canal on the acoustic streaming. The results give new insight into the role of acoustic streaming in the cleaning of root canals.
Precision and Intelligence Laboratory, Tokyo Institute of Technology, Japan
ABSTRACT
For the simulation of an ultrasonic air pump, an effective amendment to the finite element analysis (FEA) is suggested. The pump induces airflow in a thin gap between a bending transducer and a reflector by exciting an intense sound field. For the numerical calculation of the acoustic streaming, an approximate model based on the driving force of the acoustic streaming has been used. However, in the case of flow within a gap far thinner than the wavelength, this method no longer maintains its accuracy. For example, the calculated result shows air flowing away from the sound pressure nodes, which differs from what is actually observed. We attribute this error to the neglect of the gradient of the static pressure, which arises from the nonlinearity of the intense sound field.
In this paper, we suggest an amendment to the conventional analysis method in which the effect of the static pressure gradient term is considered. As the analysis procedure, we first carried out a full fluid-dynamic calculation of the pressure distribution in the thin layer between the transducer and the reflector by FEA. Secondly, from this pressure field, we obtained the sound field distribution and the static pressure distribution. Thirdly, from the sound field, we calculated the acoustic streaming driving force. Finally, we input the driving force and the static pressure gradient into the static flow analysis using FEA. With respect to the actual configuration of the device, the transducer consists of an aluminum plate (20x30x2 mm3) and a piezoelectric lead zirconate titanate plate (10x30x0.4 mm3) bonded on the back of the aluminum plate. The reflector is an acrylic resin plate of the same dimensions as the aluminum plate and is placed in parallel over the aluminum plate with a gap of 1 mm. Then, in order to obtain a directional flow, the reflector is tilted along the length direction within a range from 0 to 8 degrees.
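In the notation commonly used for such slow-flow calculations (a hedged sketch, not necessarily the authors' exact formulation), the steady flow is forced by the divergence of the Reynolds stress of the first-order field together with the gradient of the time-averaged second-order (static) pressure,
\[
\mathbf{F} = -\rho_0\left\langle (\mathbf{u}_1\cdot\nabla)\mathbf{u}_1 + \mathbf{u}_1(\nabla\cdot\mathbf{u}_1)\right\rangle,
\qquad
\langle p_2\rangle \simeq \frac{\langle p_1^2\rangle}{2\rho_0 c_0^2} - \frac{\rho_0\langle|\mathbf{u}_1|^2\rangle}{2},
\]
the amendment being that the effective forcing supplied to the steady-flow FEA is \(\mathbf{F} - \nabla\langle p_2\rangle\) rather than \(\mathbf{F}\) alone, the second term becoming important when the gap is much thinner than the wavelength.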
The calculation shows that the air flows toward the sound pressure nodes and, upon comparison with the measurement, the shape of the computed flow distribution is in good agreement with the actual results.
Laboratory for Optics, Acoustics and Mechanics, Department of Mechanical and Aerospace Engineering, Monash University, Victoria, Australia
ABSTRACT
The reduction in scale of fluidic-based chemical and biological processes offers significant analytical and sensitivity improvements as well as reduced reagent usage, increased automation and reduced manufacturing costs. Droplets deposited on a planar surface offer a convenient way of investigating very small sample sizes. We investigate the effect of vibrating droplets in the direction normal to the surface on which they sit. When the contact line of the droplet is constrained by use of a very shallow well and suitable frequencies of vibration (of the order of hundreds of Hz) are selected such that a resonant standing surface wave is established, collection of particles in predictable patterns can be achieved. When the droplet contact line is unconstrained, high-amplitude acoustic vibration (again of the order of hundreds of Hz) causes spreading of the droplet. This effect can be so pronounced that during actuation the contact angle falls below the receding angle. We demonstrate the use of this effect by merging two droplets deposited a small distance from each other. Merging occurs through surface energy minimisation as soon as the droplets spread enough to touch at one location; further vibration then causes rapid mixing of the fluids through acoustic streaming.
(1) Physics Department, University of Allahabad, India (2) Department of Physics, S.P. Memorial Institute of Technology, Kausambi, Allahabad, India.
ABSTRACT
We have calculated the second- and third-order elastic constants of GaN nanowires at room temperature, establishing the validity of the interaction potential model. The ultrasonic attenuation and velocity in the nanowires are determined using the non-linear elastic constants for different wire diameters (97 nm - 160 nm) at the nanoscale. Where possible, the results are compared with experiments and discussed. Finally, we establish the correlation between the size-dependent thermal conductivity and the ultrasonic attenuation of the nanowires.
Micro/Nanophysics Research Laboratory, Monash University, Victoria, Australia
ABSTRACT
Surface acoustic waves (SAWs) can offer a powerful method for driving fast microfluidic actuation and microparticle or biomolecule manipulation. We demonstrate that sessile drops can be linearly translated on planar substrates, and fluid pumped through microchannels, typically one to two orders of magnitude faster than is achievable with current microfluidic technologies. Micromixing can be induced in the same microchannel in which fluid is pumped using the SAW simply by changing the SAW frequency to superimpose a chaotic oscillatory flow onto the uniform through-flow. Strong inertial microcentrifugation for micromixing and particle concentration or separation can also be induced via symmetry-breaking. At low SAW amplitudes, below that at which flow commences, the transverse standing wave that arises across the microchannel affords particle aggregation, and hence sorting, on nodal lines. Other microfluidic manipulations are also possible with the SAW. For example, capillary waves excited on a sessile drop by the SAW can be exploited for microparticle or nanoparticle collection and sorting. At higher amplitudes, the large substrate accelerations drive rapid destabilization of the drop interface, giving rise to inertial liquid jets or to atomization producing 1-10 micron diameter monodispersed aerosol droplets. These have significant implications for interfacing microfluidic chips with mass spectrometry and for pulmonary drug delivery. The atomization also provides a convenient means for synthesising 150-200 nm polymer or protein particles, or for encapsulating proteins, peptides and other therapeutic molecules within biodegradable polymeric shells for controlled-release drug delivery. The atomization of thin films containing polymer solutions, in addition, produces a unique regular, long-range spatial polymer spot patterning effect whose size and spacing depend on the SAW frequency, thus offering a simple and powerful method for surface patterning without requiring physical or chemical templating.
Micro/Nanophysics Research Laboratory, Department of Mechanical Engineering, Monash University, Victoria, Australia
ABSTRACT
Ultrasonic and piezoelectric motors can be an attractive alternative to electromagnetic motors for end-effector devices such as microrobot joints and bio-medical and mobile device applications, due to their small size, compact structure, light weight and high mechanical output. In this paper, a novel ultrasonic micro linear motor that uses the 1st longitudinal and 2nd bending modes, derived from a bar-type stator with a rectangular slot cut through the stator length, has been proposed and designed for end-effector devices in microrobotic and bio-medical applications. The slot structure plays an important role in the motor design, and can be used not only to tune the resonance frequencies of the two vibration modes but also to reduce the undesirable longitudinal coupling displacement due to bending vibration at the end of the stator. By using finite element analysis, the optimal slot dimensions for improving the driving tip motion were determined, resulting in improved motor performance. The trial linear motor, with a weight of 1.6 g, gave a maximum driving velocity of 1.12 m/s and a maximum driving force of 3.4 N. A maximum mechanical output power of 1.1 W was obtained at a force of 1.63 N and a velocity of 0.68 m/s. The output mechanical power per unit weight was 688 W/kg. This value is roughly 10 times larger than that of the ceramic motor by Nanomotion (model HR1, stator size of 3 x 7.5 x 29 mm).
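The quoted performance figures are mutually consistent:
\[
P = F\,v = 1.63\ \mathrm{N}\times 0.68\ \mathrm{m\,s^{-1}} \approx 1.1\ \mathrm{W},
\qquad
\frac{P}{m} = \frac{1.1\ \mathrm{W}}{1.6\times 10^{-3}\ \mathrm{kg}} \approx 688\ \mathrm{W\,kg^{-1}}.
\]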
(1) Physics of Fluids Group, Faculty of Science and Technology, University of Twente, Enschede, The Netherlands (2) Océ Technologies B.V., Venlo, The Netherlands
ABSTRACT
Piezo drop-on-demand inkjet printers are used in an increasing number of applications for their reliable deposition of droplets onto a substrate. With this technique droplets of a few picoliters can be ejected from an ink channel at frequencies up to 50 kHz. However, as was shown in earlier research, an air bubble can be entrapped in the ink channel. Such an air bubble has a profound effect on the channel acoustics, resulting in disrupted drop formation and possible failure of the ink channel. In this research a new Micro-Electro-Mechanical Systems (MEMS) based print head was studied. By using the piezo that actuates the channel as a hydrophone, the acoustics inside the channel were measured. The measurements performed during a channel failure revealed the possible presence of air bubbles inside the channel. A model was developed to calculate the channel acoustics and predict the effect of an air bubble on the acoustics. To verify this model and to confirm the presence of air bubbles, optical measurements were required. As silicon is transparent to infrared light, a setup was created to visualize air bubbles inside the channel. With this setup, acoustical and optical measurements were acquired simultaneously. The model is now a valuable tool to calculate the presence, size and position of an air bubble inside an operating ink channel, without the need for optical access.
Tokyo University of Agriculture and Technology, Tokyo, Japan
ABSTRACT
An acoustic wave in a gas can be considered as a combination of pressure and motion oscillations. The pressure oscillation, usually accompanied by a temperature oscillation, can generate heat exchange between the gas and the wall of a waveguide. Therefore, the combination of the pressure and motion oscillations can transfer heat along the tube axis. A device using this acoustically driven heat pumping is called a thermoacoustic refrigerator. This refrigerator needs no environmentally harmful working gas and has only one moving part. Thus, a thermoacoustic refrigerator is environmentally friendly and highly reliable.
A thermoacoustic refrigerator is typically composed of an acoustic driver, an acoustic resonator and a structure with narrow flow channels called a regenerator. Acoustic power is supplied to the resonator by the driver and is converted into heat flow in the regenerator through the heat exchange between the gas and the solid material composing the regenerator.
The conventionally used acoustic resonator consists of a straight tube with two closed ends. Thus, the acoustic wave excited in the refrigerator is a standing wave. Such a refrigerator, called a standing-wave thermoacoustic refrigerator, has an intrinsic irreversibility since it works through imperfect thermal contact between the working gas and the regenerator. Therefore, in principle a standing-wave thermoacoustic refrigerator cannot achieve Carnot's coefficient of performance (COP). To increase the refrigerator's performance, a looped configuration of the resonator is proposed. The looped tube allows the excitation of a traveling acoustic wave in the refrigerator essentially without dissipation. It is recognized that when a traveling acoustic wave propagates in a tube, the acoustically induced heat pumping can operate on a thermodynamic cycle similar to the Stirling cycle. Therefore, the traveling-wave thermoacoustic refrigerator has the potential to achieve Carnot's COP. In this study, a traveling-wave thermoacoustic refrigerator was numerically optimized and then constructed. The constructed refrigerator can generate a cooling temperature of -49 Celsius and achieves a COP of 0.5 at -22 Celsius, corresponding to a COP relative to the Carnot COP of 13%.
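For orientation, the relative figure follows from the Carnot limit for refrigeration; assuming a heat-rejection temperature near 316 K (about 43 °C, a value not stated in the abstract),
\[
\mathrm{COP}_{\mathrm{Carnot}} = \frac{T_c}{T_h - T_c} \approx \frac{251\ \mathrm{K}}{316\ \mathrm{K}-251\ \mathrm{K}} \approx 3.9,
\qquad
\frac{0.5}{3.9} \approx 13\%,
\]
consistent with the quoted COP relative to Carnot.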
Department of Aerospace Engineering, Sharif University of Technology, Tehran, Iran
ABSTRACT
A thermoacoustic prime mover is composed of a regenerator, two heat exchangers, and a tube. The regenerator is sandwiched between the heat exchangers in the tube. When a steep temperature gradient is set up along the regenerator by the heat exchangers, an acoustic wave, comprising oscillating pressure and cross-sectional mean velocity, is spontaneously generated in the tube; this forces a gas parcel in the regenerator to undergo a thermodynamic cycle consisting of compression, heating, expansion, and cooling. As a result, the conversion of heat flow into work flow occurs without involving moving parts. Accurate numerical modeling of losses in thermoacoustic engines and refrigerators requires extensive numerical simulation of the coupled mass, momentum, and energy conservation equations. Heat transfer and fluid flow irreversibility in the regenerator are the major sources of losses in Stirling-type engines. In this regard, one may consider exergy analysis as an effective means to identify and quantify these losses.
In this paper, a thermodynamic model for a thermoacoustic prime mover is developed by considering exergy flow analysis. The main sources of irreversibility are incorporated in the model. While the first law of thermodynamics is employed to obtain the trends of total heat addition, net work output, and thermal efficiency, the second law is used to evaluate the total entropy generation of the Stirling cycle. Further, the entropy generation of the different components of the cycle is investigated. To this end, by assuming reasonable correlations for fluid friction and heat transfer in the regenerator, the variation of the thermodynamic parameters, including the exergy, through the regenerator from the cold end to the hot end is investigated. Further, the impact of some important parameters on the exergetic efficiency is discussed. In addition, a performance criterion based on the exergetic efficiency of the regenerator is defined and evaluated.
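A hedged statement of the standard definitions underlying such an analysis (generic forms, not the authors' specific model): the exergy flux carried along the regenerator by acoustic work \(\dot{W}\) and by heat \(\dot{Q}\) at local temperature T, relative to the ambient temperature T0, and the corresponding second-law (exergetic) efficiency, are
\[
\dot{X} = \dot{W} + \dot{Q}\left(1 - \frac{T_0}{T}\right),
\qquad
\eta_{II} = \frac{\dot{X}_{\mathrm{out}}}{\dot{X}_{\mathrm{in}}} = 1 - \frac{T_0\,\dot{S}_{\mathrm{gen}}}{\dot{X}_{\mathrm{in}}},
\]
where \(\dot{S}_{\mathrm{gen}}\) is the entropy generation rate, so that the exergy destroyed in the regenerator is \(T_0\,\dot{S}_{\mathrm{gen}}\) (the Gouy-Stodola relation).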
Department of Aerospace Engineering, Sharif University of Technology, Tehran, Iran
ABSTRACT
Traveling-wave thermoacoustic engines have received more and more attention in recent years due to their potential for realizing higher efficiency than standing-wave thermoacoustic heat engines. The use of a tapered resonator to increase the performance of thermoacoustic heat engines has been explained qualitatively. Moreover, an experimental study of the effect of the resonator shape on the performance of a traveling-wave thermoacoustic engine has been reported. In fact, a tapered resonator can effectively suppress the excitation of higher-mode oscillations and concentrate the thermoacoustic conversion in the fundamental mode. This paper presents the results of a numerical simulation of the heat and fluid interaction of a viscous compressible oscillating flow for a proposed contoured resonator of a thermoacoustic Stirling heat engine. While a tapered resonator varies the cross section of the resonator with a constant cone angle, the present results indicate that the performance is further improved by local optimization of the cone angle along the resonator. Indeed, the performance of the heat engine is enhanced by decreasing the nonlinear losses. The continuity equation, the two-dimensional Navier-Stokes equations, and the energy conservation equation for a viscous compressible gas are solved numerically by the finite element method. The magnitude and the distribution of the secondary flow are examined. A comparison is made between the uniform cross-section resonator, the tapered resonator, and the contoured resonator.
Laboratoire d'Energétique Équipe de Thermique Énergies solaire et Environnement Faculté des Sciences, Tétouan, Morocco
ABSTRACT
Research into comfort conditions in dwellings requires a better knowledge of the thermal and acoustic behaviour of the porous materials used in building construction and insulation. This work aims to develop a general study addressing both the thermal and acoustic aspects of lightweight concrete as a porous material. Using measuring equipment consisting of a sound level meter, a frequency generator and a transient acquisition system controlled by the Oscillosound3 software, we measure simultaneously several acoustic parameters (noise level, sound reduction and transmission coefficient) of the studied material.
(1) Doshisha University, Kyoto, Japan (2) University of Shiga Prefecture, Hikone-City, Japan
ABSTRACT
An electric power generation system using thermoacoustic phenomena is proposed as a technology to help mitigate global warming and the depletion of energy resources. The prototype thermoacoustic cooling system studied previously is applied. The sound pressure generated by the presented system is extremely high compared with ordinary environmental sounds: the sound pressure level exceeds 160 dB, so that a particle displacement of 3 mm is realized at a frequency of 100 Hz. Therefore, a thermoacoustic electric generation system can be designed using either piezoelectric or magnetic-induction conversion. However, it is difficult to apply a piezoelectric element to the electric generation system because its electrical impedance is too high. In this study, a loudspeaker is used as an electroacoustic converter based on magnetic induction. A loop-tube-type thermoacoustic system is used as the prototype. An open-end resonance tube whose length is adjusted to 1/4 of the loop-tube length is added to the loop-tube system to realize stable sound generation. The possibility of electric generation was investigated using the prototype system. Stable sound generation is observed even though part of the loop-tube is open. The connection position of the resonance tube is selected experimentally by moving it to the point where the maximum energy conversion efficiency is realized. The prototype system consists of a 3300 mm long loop-tube and an 825 mm long resonance tube; the inner diameter of both tubes is 42 mm. A full-range loudspeaker is located at the end of the resonance tube, and the energy conversion from heat to electricity is observed under various conditions. A sound pressure level of over 160 dB is realized by the presented system. Furthermore, it is confirmed that an active electric power of 1.1 W is generated by the proposed thermoacoustic system.
Department of Mechanical Engineering, McGill University, Quebec, Canada
ABSTRACT
The flow field in an acoustic standing wave tube was measured using time-resolved particle image velocimetry (PIV). Verifications were made through comparisons between measured and predicted acoustic particle velocities in the spatial domain and the time domain. The accuracy of the time-resolved PIV system was satisfactory, at least for the periodic flow velocity component. The steady streaming flow field was then obtained through synchronous data acquisition. The streaming flow featured recirculation patterns which were different from classical Rayleigh or Schlichting streaming patterns. One possible reason is that the streaming Reynolds number was too low for classical streaming to occur.
(1)University of Shiga Prefecture, Hikone, Shiga, Japan (2)Doshisha University, Kyoto, Japan
ABSTRACT
For the practical application of a loop-tube-type thermoacoustic system, it is important to improve its energy conversion efficiency. We propose a loop-tube-type thermoacoustic system with a diverging sub loop tube. The sub loop tube diverges from the main loop tube and rejoins it, so that the sub loop tube forms a loop. The sub loop tube creates a boundary condition in which the particle velocity is decreased as the cross-sectional area of the system is increased in the diverging section. This adjusts the phase difference between the pressure and the particle velocity at the prime mover so that the phase difference between them becomes smaller, which contributes to improving the efficiency of heat-to-sound energy conversion. The main loop tube was 0.85 m in height and 0.5 m in width, and had a total length of 3.3 m. The sub loop tube's length from the upper side to the lower side was 0.35 m. The sound pressure in the main loop tube was measured with and without the sub loop tube. The position of the sub tube was changed so that the distance from the heater to the upper part of the sub tube was either 1.73, 1.83, or 1.93 m. The phase difference and the sound intensity are calculated using a two-sensor power method from the pressure measurement results. The obtained phase difference distribution shows that the smallest phase difference was observed when the sub loop tube diverges at the position of 1.93 m. The largest sound intensity of 13 kW/m2 was observed when the sub loop tube was positioned at 1.93 m; the smallest, of 0.64 kW/m2, was observed without the sub tube. From these results, it is confirmed that the diverging sub loop tube decreases the phase difference between the pressure and the particle velocity at the prime mover and increases the heat-to-sound energy conversion efficiency.
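The two-sensor power method mentioned above estimates the acoustic intensity and the pressure/velocity phase difference from two closely spaced pressure measurements. A minimal sketch of one common formulation is given below (assumed: a single working frequency and a finite-difference estimate of the particle velocity; this is a textbook version, not necessarily the authors' implementation).

```python
import numpy as np

def two_sensor_method(p1, p2, fs, dx, f0, rho=1.2):
    """Two-sensor (two-microphone) estimate of the active intensity (W/m^2)
    and the pressure/velocity phase difference (rad) at frequency f0, from
    pressure traces p1, p2 (Pa) measured dx metres apart along the tube axis.
    Positive intensity means energy flow from sensor 1 toward sensor 2."""
    n = len(p1)
    k = int(round(f0 * n / fs))                 # FFT bin of the working frequency
    P1 = 2.0 / n * np.fft.rfft(p1)[k]           # complex pressure amplitudes
    P2 = 2.0 / n * np.fft.rfft(p2)[k]
    omega = 2.0 * np.pi * f0
    intensity = np.imag(P1 * np.conj(P2)) / (2.0 * rho * omega * dx)
    P_mid = 0.5 * (P1 + P2)                     # pressure at the midpoint
    U = (P1 - P2) / (1j * omega * rho * dx)     # finite-difference velocity
    phase_diff = np.angle(P_mid * np.conj(U))   # pressure phase minus velocity phase
    return intensity, phase_diff
```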
(1) Creative Design Studio on Technology, Graduate School of Engineering, University of Osaka, Suita, Japan (2) Department of Mechanical Science, Graduate School of Engineering Science, University of Osaka, Toyonaka, Japan
ABSTRACT
This paper revisits the derivation of the marginal conditions of thermoacoustic Taconis oscillations in a helium-filled, quarter-wavelength tube subjected to a smooth temperature distribution. As is well known, the linear stability analysis was developed by N. Rott (1973), and the marginal conditions were obtained for a step temperature distribution by taking full account of thermoviscous effects. In that analysis, the boundary-layer theory was regarded as incapable of deriving the conditions. However, it has recently been shown that the theory is applicable to short-time behaviour in any situation. Using this theory, the marginal conditions for a smooth temperature distribution are obtained and checked against Rott's results.
Dividing the field in the tube into a boundary layer and an acoustic main-flow region outside of it, the boundary layer is assumed to be described by the linear theory to first order in its thickness. The fluid-dynamical equations are averaged over the cross-section of the tube, from which one-dimensional equations over the main-flow region are derived by using the boundary-layer solutions. Effects of the boundary layer appear through memory integrals expressed in terms of half-order derivatives. For a smooth temperature distribution of the tube wall increasing monotonically toward the closed end, an initial- and boundary-value problem for the derived one-dimensional equations is solved numerically for the evolution of a small disturbance. Neglecting radiation from the open end, the excess pressure is required to vanish at the open end, while the boundary layer at the closed end is taken into account for the axial velocity. As long as the ratio of the temperature at the closed end to that at the open end is below a certain value, the initial disturbance decays through thermoviscous dissipation. When the ratio exceeds this value, the initial disturbance becomes unstable and grows in amplitude. Between the stable and unstable regimes, there exists a critical temperature ratio at which oscillations persist for a long time. If the magnitude of the initial disturbance is taken to be small enough, this critical temperature ratio may be regarded as the marginal condition given by the linear stability analysis. For several temperature distributions, the marginal curves are obtained numerically and compared with those due to Rott.
Department of Mechanical Science, Graduate School of Engineering Science, University of Osaka, Toyonaka, Japan
ABSTRACT
This paper examines the linear propagation of thermoacoustic waves in a gas enclosed in a narrow, infinitely long channel subject to an axial temperature gradient. An analysis is made to derive a wave equation by assuming that a typical axial length is much longer than the channel width, while the thickness of the thermoviscous diffusion layer is arbitrary relative to the width. It is shown that the system of equations reduces to a spatially one-dimensional wave equation for the pressure, given in the form of an integro-differential equation owing to memory from thermoviscous effects. This equation, called a thermoacoustic wave equation, can describe the spatio-temporal evolution of any form of pressure disturbance. If time-harmonic oscillations are considered, it reduces to Rott's equation for the pressure amplitude. Approximations of the wave equation are discussed on the basis of a Deborah number, a dimensionless parameter comparing the time scale of interest with the diffusion time. For short times, i.e. a large Deborah number, the equation is shown to be simply the one derived by boundary-layer theory, while for long times, i.e. a small Deborah number, it reduces to a wave-diffusion equation. This reveals that the thermoviscous effects combined with the temperature gradient give rise to wave propagation toward the positive direction of the gradient. If the gradient is steep, they give rise to negative diffusion, so that convective instability occurs.
(1) Doshisha University, Kyoto, Japan (2) University of Shiga Prefecture, Hikone-City, Japan
ABSTRACT
A new silencer based on thermoacoustic phenomena is proposed. The sound energy is converted to heat energy by the thermoacoustic system, and the sound is thereby muted. The key device of the presented silencer is a stack consisting of many narrow channels of less than 1 mm. To keep the system working continuously, a temperature difference is stably maintained across the two sides of the stack by heat exchangers. When the sound wave is input to the stack from the high-temperature side, the sound energy is attenuated; on the other hand, when the sound wave is input to the stack from the low-temperature side, the sound energy is amplified. The conversion from sound to heat is caused by two mechanisms: the heat exchange between the stack wall and the fluid particles, and the viscous dissipation in the stack. However, it is difficult to measure them separately. In this study, we propose to separate these mechanisms by comparing the amplification and attenuation effects experimentally. Under the amplification effect, the sound energy is amplified by the heat exchange while being attenuated by the viscous dissipation; under the attenuation effect, the sound energy is attenuated by both effects. Therefore, by comparing the two effects under the same experimental conditions, the contribution ratio of these mechanisms can be shown quantitatively. The obtained results are discussed from the viewpoint of a thermoacoustic silencer.
Tokyo University of A&T, Tokyo, Japan
ABSTRACT
A thermoacoustic engine consists of a resonance tube and a stack composed of many narrow tubes; the stack is located inside the resonator. When the ratio of the temperatures at the two ends of the stack exceeds a critical value, the gas in the resonance tube spontaneously oscillates and the thermoacoustic engine works. Recently, the miniaturization of thermoacoustic engines has attracted much attention owing to their simplicity and potential for high efficiency. In this study, the stability limit of the spontaneous gas oscillation in a thermoacoustic engine is calculated numerically using thermoacoustic theory. In this calculation, the ratio of the lengths of the resonator and the stack is taken as one of the parameters, because when designing a small-scale thermoacoustic engine one wants to make the stack as long as possible to reduce the thermal conduction loss along the stack. As a result of the calculation, it was found that the length ratio strongly affects the critical temperature ratio needed to cause the spontaneous gas oscillation.
CSIRO Process Science and Engineering, Waterford, WA, Australia
ABSTRACT
Flotation cells are commonly used in the minerals industry to separate fine particles of a mineral of interest from reject (gangue) rock. In these devices hydrophobic particles (usually the mineral) become attached to bubbles generated in a pulp (slurry) region and rise into the froth region at the top of the flotation cell where they become product. The gangue drops to the bottom of the tank and is removed as waste. An Acoustic Emission (AE) passive monitoring system has been developed for analysis of pulp and froth process state in flotation cells. The formation, flow, coalescence and collapse of bubbles in the pulp and froth of a flotation cell naturally generate strong AE signals that may be used as indicators of process state and hence are potentially of great value to industry. A multiple sensor AE monitoring system has been developed and tested in plant trials on flotation cells of varying types at coal washeries.
This paper discusses the development of the AE passive monitoring system for flotation cell process state. Results presented in the paper are based on flotation cell monitoring over a broad range of operating conditions. Both hydrophones and broadband accelerometers mounted at various locations in a flotation cell are utilised to characterise the process state in the pulp and froth. Fourier signal analysis is used to characterise large changes in sensor response, detected as a function of process parameters including aeration rate and pulp flow rate. A linkage between AE characteristics and bubble solids loading is discussed. It is concluded that the approach of passive AE monitoring and signal analysis and modelling could provide valuable on-line information on washery flotation cell process efficiency.
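As a rough illustration of the Fourier-based characterisation described above, the sketch below extracts simple band-energy features from an acoustic emission or hydrophone record; such features could then be tracked against process parameters such as aeration rate or pulp flow rate. The band edges are illustrative assumptions, not values from the paper.

```python
import numpy as np

def band_energies(x, fs, edges=(1e3, 5e3, 20e3, 50e3, 100e3)):
    """Split an AE/hydrophone record x (sampled at fs Hz) into energies in
    adjacent frequency bands.  Band edges (Hz) are illustrative only."""
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return [psd[(f >= lo) & (f < hi)].sum()
            for lo, hi in zip(edges[:-1], edges[1:])]
```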
Materials Science and Engineering Division, CSIRO, Clayton South, Victoria, Australia
ABSTRACT
Commonly observed types of damage in wood products and wood-based composites are wood fibre fracture, delamination between plies, and debonding of wood-adhesive layers. Delamination, probably the most frequently observed type of damage, may be produced during manufacturing or during in-service loading, such as accidental excessive loading produced for example by snow, or by fatigue under highly variable environmental conditions of temperature and humidity. Damage detection in general, and delamination detection in particular, is a very important issue in the context of structural health monitoring for mechanical engineering infrastructure with elements made of wood and wood-based composites. The development of computational techniques in the last twenty-five years, and the progress achieved in the mechanical characterisation of solids in general and of composites in particular, have positively affected the development of models of the mechanical behaviour of wood as a function of its structure. Related studies clearly suggest that delamination in solid wood can occur between different layers of the cell wall at the submicroscopic, microscopic and macroscopic structural levels. With respect to wood-based composites, the behaviour of two groups of products has been analysed: laminated products and fibre-based products. Delamination detection studies are summarized in the context of structural health monitoring. The review of the theoretical aspects related to the detection of damage induced by delamination in composites is oriented in two main directions: (1) the nondestructive evaluation method using an ultrasonic technique with Lamb waves, an experimental method able to provide local damage information; and (2) the model-dependent method, which undertakes analysis of structural models implemented by finite element analysis and is able to provide global damage information for linear and non-linear mechanical behaviour of the system.
The structural health monitoring process of large laminated wood structures, in light of normal aging and degradation resulting from operational environments, must involve the periodic inspection of the system using: (1) sampled dynamic response measurements from an array of transducers, establishing their number, resolution, bandwidth, data acquisition (periodic or continuous), storage and transmission hardware; (2) extraction of damage-sensitive features and normalization of the data by the measured inputs or by environmental cycles (summer, winter); and (3) statistical analysis of the data to determine the current state of the system. The development of testing methodologies for wood-based composites should be encouraged as part of the efforts being made to control the performance of low-cost building materials.
Institute of Telecommunications, Teleinformatics and Acoustics, Wroclaw University of Technology, Wroclaw, Poland
ABSTRACT
Ultrasonic transmission tomography is one of the measurement methods used for imaging the internal structure of various media. The method can be used in gaseous media, e.g. to identify the shape, size and location of objects, to determine the spatial distribution of temperature in a studied area (if there is a heterogeneous field of that quantity), and to determine the concentration of components of selected binary gas mixtures. The non-invasive nature and the short duration of the measurement process are great advantages of this method. All applications of the method in gaseous media require suitable ultrasonic transducers intended for operation in a gaseous medium in the frequency band of 20 kHz - 100 kHz and generating a cylindrical wave. The paper presents ultrasonic transducers designed and constructed using piezoelectric film. They are capable of generating a cylindrical wave at the frequencies f = 31.5 kHz, f = 63 kHz and f = 90 kHz and are intended to work in gaseous media. The focus is on the electromechanical model of the transmitting and receiving transducers; the properties of the EMFi film and the driving systems (using burst-type signals) are taken into consideration. The paper also presents the results of measurements of the directivity characteristics of the transmitting transducers and the results of measurements performed using those transducers operating as ultrasonic wave receivers. Additionally, examples of applications of ultrasonic transmission tomography in gaseous media are presented.
Institute of Acoustics, Key Laboratory of Modern Acoustics, Ministry of Education, Nanjing University, Nanjing, P.R.China
ABSTRACT
Acoustical methods play an important role in the nondestructive evaluation (NDE) of adhesive-bonded composites and components in industrial applications. A dual-frequency ultrasonic technique is proposed for the quantitative evaluation of the contact strength between pressed solid surfaces. An ultrasonic excitation consisting of two primary frequency components is applied perpendicular to the interface, and the transmitted wave is examined. The theoretical study is based on a perturbation analysis of the contact acoustic nonlinearity (CAN) model, predicting the generation of difference- and sum-frequency waves together with the second harmonics. Nonlinear parameters are defined to describe the nonlinearity generation efficiencies. Experiments are performed for three types of interfaces, i.e. the interfaces of two aluminum alloy blocks with and without couplant and of two glass blocks. The difference-frequency wave component has a higher generation efficiency than the other nonlinear components, which offers the advantage of high SNR and good detection capability of contact stiffness. For each interface, the first- and second-order interfacial stiffnesses are measured with the contact pressure increasing from near zero to about 0.8 MPa with the aid of a laser interferometer. Finally, numerical simulations are also carried out, and consistency is found between measurements and calculations. The dual-frequency ultrasound sent to the interface generates at least four second-order nonlinear components, which enriches the CAN technique for interface quality examination. Both measured and simulated results indicate an increase of interfacial stiffness and a decrease of the nonlinear parameters with growing contact pressure. Moreover, the measured results show that couplant between the interfaces influences the contact stiffness evaluation in an enhanced manner, while the contact pressure determined from the measured interfacial stiffness values is underestimated owing to the couplant. The main problem comes from the contact at the transducer-sample interfaces, which brings extra nonlinearity into the detected signals and affects the measurement accuracy.
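To make the idea of the nonlinear parameters concrete, the sketch below extracts the primary, difference-frequency, sum-frequency and second-harmonic amplitudes from a transmitted-wave spectrum and forms simple normalised ratios; the normalisation chosen here is a common convention and an assumption, not necessarily the definition used by the authors.

```python
import numpy as np

def spectral_amplitude(x, fs, f_target):
    """Peak amplitude of the spectral component of x nearest f_target (Hz)."""
    X = np.abs(np.fft.rfft(x)) * 2.0 / len(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    return X[np.argmin(np.abs(f - f_target))]

def can_components(x, fs, f1, f2):
    """Primary, intermodulation and second-harmonic amplitudes of a
    transmitted dual-frequency signal, with simple normalised ratios.
    The normalisation (A1*A2 for intermodulation, A^2 for harmonics) is a
    common convention, assumed here for illustration."""
    A1 = spectral_amplitude(x, fs, f1)
    A2 = spectral_amplitude(x, fs, f2)
    comps = {"diff": spectral_amplitude(x, fs, abs(f2 - f1)),
             "sum": spectral_amplitude(x, fs, f1 + f2),
             "2f1": spectral_amplitude(x, fs, 2 * f1),
             "2f2": spectral_amplitude(x, fs, 2 * f2)}
    norm = {"diff": A1 * A2, "sum": A1 * A2, "2f1": A1 ** 2, "2f2": A2 ** 2}
    ratios = {key: comps[key] / norm[key] for key in comps}
    return comps, ratios
```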
Fraunhofer Institute for Nondestructive Testing, Saarbrücken, Germany
ABSTRACT
The macroscopic elastic properties of multi-phase and polycrystalline materials depend on their microstructure and on the elastic constants of the different phases. Hence, nondestructive characterization of such materials requires techniques to probe elastic properties on a micro- or nanoscale. Atomic force acoustic microscopy (AFAM), a near-field technique which combines atomic force microscopy (AFM) with ultrasound, is convenient for this purpose. AFAM is a contact resonance spectroscopy technique allowing one to measure elastic properties of surfaces with a local resolution in the nm range. Similar to nanoindentation, it is sensitive to the indentation modulus, which accounts for the normal and shear deformation in the tip-sample contact zone. We imaged the contact resonance frequency and contact stiffness distribution in nickel-base alloy 625, 9Cr-1Mo ferritic steel, and the most commonly used titanium alloy, Ti-6Al-4V. In nickel-base alloy 625 and 9Cr-1Mo ferritic steel, precipitates were visualized and their indentation moduli were determined using the indentation modulus of the matrix as a reference. Ti-6Al-4V primarily consists of two different crystal structures: the hexagonal close-packed (hcp) α-phase and the body-centred cubic (bcc) β-phase. Depending on the thermodynamic heat treatment, the α- and β-phases form different microstructures and arrangements which can be clearly imaged by AFAM. A correlation of the applied heat treatment and the resulting microstructure was sought in order to provide guidelines for materials design. For the quantitative evaluation of the AFAM data we used analytical models and finite element analysis of the vibrating cantilever.
Department of Applied Physics, Polytechnic School, University of Extremadura, Spain
ABSTRACT
In this work, we calculated the ultrasonic velocities of compression (vL) and shear (vT) waves, and the ultrasonic elastic constants of mortars. The 14 specimens investigated were manufactured using different cement strength classes (22.5, 32.5 and 42.5 N/mm2) and water/cement ratios varying from 0.4 to 0.6. Each mixture was made in two distinct geometries: prismatic (4x4x16 cm3) and cylindrical (30 cm length and 15 cm diameter). Firstly, we found that the prismatic samples, with dimensions exactly those set out in the Spanish regulatory norms for the evaluation of the mechanical resistance of cements, were too small for the ultrasound frequencies used (200 kHz). This implies that erroneous values will be obtained in determining the ultrasound parameters of mortars made with this geometry, and hence that there will be no possibility of establishing simple mathematical relationships between those parameters and the non-ultrasound variables. Nevertheless, the samples made with the two geometries presented similar mechanical properties. At the same time, knowledge of other parameters (flexural/compressive strengths) of these mortars allowed us to study different correlations between the ultrasound and non-ultrasound parameters. Of special interest in these results were the following: (1) the prismatic samples are not valid for carrying out the ultrasound study owing to their small size, while the cylindrical ones are; (2) the strengths of the samples made with the same water/cement ratio can be quantified in situ from the ultrasound variables.
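For reference, the standard isotropic relations linking the measured wave velocities and density to the dynamic elastic constants are sketched below; the numerical values in the example are merely illustrative, not data from the paper.

```python
def dynamic_elastic_constants(vL, vT, rho):
    """Isotropic dynamic elastic constants from compression (vL) and shear
    (vT) wave velocities (m/s) and density rho (kg/m^3)."""
    G = rho * vT ** 2                                     # shear modulus, Pa
    nu = (vL ** 2 - 2 * vT ** 2) / (2 * (vL ** 2 - vT ** 2))  # Poisson's ratio
    E = 2 * G * (1 + nu)                                  # Young's modulus, Pa
    K = E / (3 * (1 - 2 * nu))                            # bulk modulus, Pa
    return E, G, K, nu

# e.g. a mortar with vL ~ 4000 m/s, vT ~ 2400 m/s, rho ~ 2100 kg/m^3 (illustrative)
print(dynamic_elastic_constants(4000.0, 2400.0, 2100.0))
```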
(1) Department of Applied Physics, Polytechnic School, University of Extremadura, Spain (2) Department of Food Technology, Centro Tecnológico Agroalimentario Extremadura (CTAEX), Spain
ABSTRACT
Ultrasound has been used to non-destructively assess the quality of many foods. This paper presents a non-invasive ultrasonic method for monitoring changes in the physical properties of an organic cheese (Torta del Casar) made from sheep's milk. For that purpose, we first studied the changes in ultrasonic velocity, attenuation and harmonic components during the renneting process by the pulse-echo method at different frequencies (500 kHz and 1 MHz); these variations are induced by the changes in the liquid medium. The pH and temperature of the rennet-induced milk samples were also measured simultaneously with the ultrasonic measurements. A total of three experiments were conducted in a laboratory environment at 25.6-27.6 °C, 23.7-28.2 °C and 32.0-33.1 °C, respectively. The cutting times determined from the ultrasonic measurements were compared with the cutting times from manual methods. We then studied the velocity and the FFT of the longitudinal and shear waves during cheese maturation by an ultrasonic transmission technique at different frequencies (50 kHz, 100 kHz, 250 kHz and 500 kHz). The maturation experiments at different days were performed on 32 blocks of Torta del Casar, including commercial cheeses; the measurement temperature range was 5.0-7.0 °C. From the obtained results, it appears to be possible to use an ultrasonic device to non-destructively monitor the cheese manufacturing processes (renneting and maturation).
(1) Graduate School of Systems and Information Engineering, University of Tsukuba, Kanto, Japan (2) Department of Pure and Applied Physics, Faculty of Engineering Science, Kansai University, Japan
ABSTRACT
Phononic crystals have various characteristics, such as band gaps, group delay and negative refraction. Among these, we focus on negative refraction. Focusing of ultrasound using negative refraction in phononic crystals has been investigated by many researchers, and focused ultrasound has expected applications in the medical field, among others. However, when an ultrasonic wave propagates in a phononic crystal it attenuates sharply, and once the crystal has been fabricated the focal length is fixed. For such applications, it is desirable to be able to vary the focal length of the phononic crystal. In our previous research, we proposed a dual-structured phononic crystal. This structure has a gap between the two phononic crystals, and it was verified that the focal length could be varied by changing the thickness of the gap. Additionally, it was confirmed that the attenuation of the proposed structure is lower than that of a single phononic crystal of the same thickness. In this research, we examined the relationship between the characteristics of ultrasound focused by a layered phononic crystal and the crystal structure of each layer, using the finite element method (FEM) and the phononic crystal band structures. As a result, we obtained a more efficient crystal structure for ultrasound focusing by a layered phononic crystal. Experimental verification is left for future work.
Department of Materials Processing, Tohoku University, Sendai, Japan
ABSTRACT
Nonlinear ultrasound is the most promising means of evaluating closed cracks whose faces are in contact because of residual stress or oxide films. In particular, subharmonic waves at half the input frequency are useful because of their excellent selectivity for closed cracks. Thus far, we have developed a novel imaging method, the subharmonic phased array for crack evaluation (SPACE), and demonstrated its performance on closed fatigue and stress corrosion cracks. SPACE provides fundamental (f) and subharmonic (f/2) images by filtering the received waveforms at each frequency. However, strong linear scatterers such as coarse grains, weld defects and back surfaces are sometimes visualized in the subharmonic images because of filter leakage, since short-burst waves are used to obtain high temporal resolution. This artefact may degrade the ability of SPACE to identify closed cracks. To solve this problem, we propose an extension of SPACE as well as another approach based on the subtraction of responses at different external loads. When external static or dynamic loads are applied to closed cracks, the contact state in the cracks varies, resulting in a change in the responses at the cracks, whereas linear scatterers other than cracks are not influenced by the external loads. Therefore, only the cracks can be extracted by subtracting the responses at different loads. In this study, we performed fundamental experiments on a closed fatigue crack formed in an aluminium alloy A7075 specimen. Here we utilized the static-load dependence of SPACE images and the dynamic-load dependence of linear phased array images. As a result, we demonstrated that this method can extract only the variation in the responses at the closed cracks, while cancelling the responses from features other than cracks.
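The core signal-processing steps described above, separating fundamental and subharmonic bands and subtracting two load states, can be sketched as follows. The filter type and relative bandwidths are assumptions for illustration, not the SPACE implementation itself.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band(x, fs, f_lo, f_hi, order=4):
    """Zero-phase band-pass filter (a generic sketch, not the SPACE code)."""
    b, a = butter(order, [f_lo / (fs / 2), f_hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def subharmonic_load_difference(rf_load1, rf_load2, fs, f_in):
    """Split two A-scans (acquired under different external loads) into
    fundamental (around f_in) and subharmonic (around f_in/2) components,
    then subtract the two load states so that only load-dependent (crack)
    responses remain.  Bandwidth fractions are illustrative assumptions."""
    fund = [band(x, fs, 0.8 * f_in, 1.2 * f_in) for x in (rf_load1, rf_load2)]
    sub = [band(x, fs, 0.4 * f_in, 0.6 * f_in) for x in (rf_load1, rf_load2)]
    return fund[0] - fund[1], sub[0] - sub[1]
```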
Institute of Telecommunications, Teleinformatics and Acoustics, Wroclaw University of Technology, Wroclaw, Poland
ABSTRACT
A number of authors working in the field of medical imaging have recently suggested the use of multielement arrays and investigated methods of selecting the optimal number and distribution of the elements of the transmitting and receiving apertures. This is also one of several methods that suppress grating side lobes (caused by the aliasing effect), and hence it allows relaxing the criterion of maintaining a suitably small distance (pitch) between the centres of adjacent transducers (less than half a wavelength) in the 2-D array, which is itself a difficult requirement to realise in practice. The majority of 2-D ultrasonic multielement arrays are designed for miniature 3-D volumetric medical endoscopic imaging as intracavitary probes, providing unique opportunities for guiding surgery or minimally invasive therapeutic procedures. Most of them are intended for operation using the echo method.
This paper offers several schemes for activating a small number of elements in a pair of 2-D transmitting and receiving plane arrays for imaging the structure of biological media by means of ultrasonic projection (the transmission method). Such aperture synthesis with suitably switched small subarrays (according to the scanning method) allows a significant directivity and an increased ultrasonic wave intensity to be achieved with an acceptable decrease of the input electrical impedance. The novel approach in this work lies in modelling the wave beam profile by a simple simulation of the spatial distribution of the product of the transmitted and received ultrasonic wave fields as an effective transmitting-receiving aperture, rather than the coarray, the aperture or the point spread function (PSF) used in the echo method. Finally, a simulation algorithm was developed and presented, and the calculations and measurements of ultrasonic wave field distributions for some essential transmitting-receiving aperture configurations were compared.
(1) College of Indust. Tech., Nihon University, Japan (2) FCG Research Institute Inc., Japan
ABSTRACT
It is important to detect wood-boring insects, not only at ports but also in homes. Chemical pest control is one method; however, its environmental risks must be considered. In this work, ultrasonic vibration was supplied directly to the surface of the wood. Half-wavelength resonance step horns made of Duralumin were designed for frequencies of 20 kHz, 28 kHz, and 40 kHz and driven by a piezoelectric transducer. The tip of the horn was set on the surface of the wood, and a sinusoidal wave was supplied to the transducer for 3 minutes. A 10 kg weight was placed on the horn to apply static pressure while the pallet was tested.
We artificially drilled holes (2 mm diameter) into a pallet of Japanese cedar with a cross section of 30 x 10 mm and a length of 500 mm. The holes were drilled to a depth of 3 mm from the surface of the pallet. The step horn was connected to the side wall of the pallet and the ultrasound was supplied. The next specimen had bite marks. During the test, the horn was connected to the surface of the pallet; the vibration energy at the tip of the horn was converted to thermal energy and propagated through the pallet, so that the temperature of the pallet increased. An infrared thermal video system was used to measure the temperature, and the radiant thermal energy was converted to a temperature distribution inside the pallet. We found that the temperature distribution clearly showed the shape of the artificially drilled holes. Moreover, when the pallet with the bite mark was flipped and the temperature distribution was measured up to 40 degrees Celsius, the bite mark was visible in the distribution. Therefore, this system shows promise as an environmentally friendly pest control method.
(1) Department of Physics, Govt. Girls College, India (2) Government P.G.College Dhar, India (3) School of Physics, Devi Ahilya University, Indore, India.
ABSTRACT
The acoustic and dielectric properties of Borassus flabellifer (BF) wood were investigated by the authors at various moisture contents as functions of frequency and temperature. The wood sections were taken from a male BF tree and a female BF tree, from which the natural juice is collected overnight. The results show that the moisture content of BF wood affects the dielectric properties considerably. The variation of the relative dielectric constant in different wood structures is correlated with the density variations, such that the received signal strength increases with increasing wood density. The acoustic measurements were carried out with an ultrasonic interferometer, and the dielectric measuring device used in this study, developed on the basis of a new method, allows non-destructive and extremely fast measurement of wood density variations. In this study, drying of the wood by microwave energy using a continuous microwave drier was compared with drying by the conventional method. Characterization of the samples was made with standard methods such as X-ray diffraction (XRD) and scanning electron microscopy (SEM).
Laboratory of Ultrasound, Division of Acoustics and Vibration Metrology, Directorate of Scientific and Industrial Metrology, National Institute of Metrology, Standardization, and Industrial Quality, Duque de Caxias, Brazil
ABSTRACT
Flowmeters based on ultrasonic time of flight and delay (ToFD) are common and represent the most accurate type of flowmeter for some applications. That is the case, for instance, for water flow measurements at the inlet of hydroelectric power turbines. As the inlet tubes are often large for those kinds of applications, with diameters easily exceeding 5 metres, the turbulence due to transducer positioning inside the water flow is considered irrelevant. Nevertheless, some disturbance, and consequently some uncertainties or errors, should be taken into account. The ToFD principle for these applications demands pairs of transducers to be positioned across the fluid flow, one upstream and the other downstream with respect to the flow direction. The alignment is mechanical and should be done carefully. This paper describes a novel way of positioning pairs of transducers in an ultrasonic flowmeter scheme based on the beam pattern of a multi-element transducer. The theory presented ensures a virtually undisturbed flow after positioning of the transducers, and the alignment is performed easily by adjusting ultrasonic parameters of the input signal. An improvement of 50% is expected in the accuracy of such a flowmeter scheme for small tubes (diameter less than 1 metre) and from 10% to 20% for larger tubes (diameter larger than 5 metres).
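For background, the standard transit-time relation on which such flowmeters are based can be sketched as follows; the geometry and numbers in the example are illustrative assumptions, not values from the paper.

```python
import math

def transit_time_velocity(t_up, t_down, L, theta_deg):
    """Axial flow velocity (m/s) from upstream/downstream transit times (s)
    along an acoustic path of length L (m) inclined at theta to the pipe
    axis.  Standard transit-time relation; independent of the sound speed."""
    theta = math.radians(theta_deg)
    return L * (t_up - t_down) / (2.0 * math.cos(theta) * t_up * t_down)

# illustrative numbers only: 5 m diameter penstock, path at 45 degrees,
# water sound speed c ~ 1480 m/s, true axial velocity v ~ 2 m/s
L = 5.0 / math.sin(math.radians(45.0))            # path length across the pipe
c, v = 1480.0, 2.0
t_down = L / (c + v * math.cos(math.radians(45.0)))
t_up = L / (c - v * math.cos(math.radians(45.0)))
print(transit_time_velocity(t_up, t_down, L, 45.0))   # recovers ~2.0 m/s
```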
Université de Toulouse PHASE, Toulouse, France
ABSTRACT
Time Domain Topological Energy (TDTE) is a new imaging method that comes from the field of shape optimization under constraints and corresponds to an approximate resolution of the inverse problem. TDTE was first developed for Non-Destructive Testing, where it has shown a better ability to image defects in complex materials than classical tools. For acoustic imaging purposes, the rationale follows these steps: an inspected medium is compared to a numerical reference whose geometrical and physical properties can be iteratively modified. This comparison is realized using the ultrasonic field recorded by an array of transducers. A forward field is obtained numerically by simulating the acoustic propagation in the reference medium, whose velocity and density are chosen to be close to those of the inspected medium. This means that the whole ultrasonic field is known inside the entire reference medium and at the location of the transducer array during the recording time. Under the constraint of the wave equation, an adjoint problem arising from an optimization process in the time domain leads to a time-reversal formulation in which the signal difference is time reversed and propagated through the reference medium, giving the complete adjoint field. As the comparison involves the minimization of a cost function, the first term of the asymptotic expansion of this function, often called the topological derivative or topological gradient, can be used to draw an acoustical image of the medium. A more stable quantity, called the "topological energy", is computed by integrating over time the product of the squared forward and adjoint fields. This modified version of the topological gradient avoids the need for iterations, limits instabilities and improves convergence.
Physics Department, CMP College, Allahabad Central University, India
ABSTRACT
Variations and damping of sound waves depend on the material properties, and acoustic inspection has recently gained significant status in the study of nanomaterials. The author has studied acoustic attenuation in materials of different kinds, such as metals, dielectrics and semiconductors, under extreme conditions of temperature and frequency, along various crystallographic directions of the propagated wave, using theoretical models and the pulse-echo technique. Such investigations have revealed that electron-phonon interactions at low temperatures and phonon-phonon interactions in the high-temperature domain are the dominant factors contributing to acoustic attenuation in all types of materials, except those undergoing the superconducting transition, where drastic changes in the theoretical models need support. The acoustical investigations were made via phonon gas interactions and thermoelastic factors using ultrasonically measured third-order elastic constants. The nonlinear parameters and the absorption coefficients were studied along the <100>, <110> and <111> directions of the propagated wave.
Department of Electrical Electronics and Computer Engineering, Chiba Institute of Technology, Narashinoshi, Japan
ABSTRACT
Target ranging methods using ultrasonic pulse-echo are widely employed for the remote sensing of automobiles and robots, but their accuracy is not sufficient for instantly measuring the moving speed of a target. Instead, the Doppler shift of the frequency is commonly used for speed measurement; however, the lower limit of speed measurement is generally greater than 3 m/s.
In order to acquire a compressed pulse with both high resolution and high signal-to-noise ratio (SNR), a new method for target ranging with the ultrasonic pulse-echo method was previously proposed, using a sensitivity-compensated transmitting (SCT) signal derived by inverse filtering together with matched-filter pulse compression. It was verified experimentally that the effective spectrum of the received echo signal using the SCT signal is flatter and broader, and that both the resolution and the SNR of the compressed pulse are enhanced compared with those obtained using a conventional linear frequency-modulated chirp as the transmitted signal.
In this paper, an approach to speed measurement using the SCT signal and pulse compression is studied. Two ultrasonic transducers with a 40 kHz resonant frequency are employed. First, with a direct transmitting-receiving arrangement of the transducers and a linear frequency-modulated chirp as the transmitted signal, a reference signal, whose spectrum depends mainly on the sensitivities of the ultrasonic transducers, is measured. The SCT signal is calculated by inverse filtering from the quotient of the spectra of the chirp and its received reference signal. Then, a 7 cm x 7 cm steel plate at a distance of about 1 m from the transducers is employed as the measurement target. Using the SCT signal, the received echo signal shows an effectively flat spectrum between 38 kHz and 51 kHz, and the compressed pulse shows high resolution with a pulse width of about 1/3 of that obtained using the chirp as the transmitted signal. For speed measurement, a transmitted signal consisting of two SCT pulses with a time interval of 3.096 ms is employed, and moving speeds lower than 2 m/s are measured. The measurement results show an error of less than 5%. Because the speed error is inversely proportional to the interval between the two transmitted pulses, higher accuracy can be expected by lengthening the interval between the two SCT pulses. These results indicate the possibility of low-speed estimation using the ultrasonic pulse-echo method.
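A minimal sketch of the processing chain described above, a regularised inverse filter for the SCT waveform, matched-filter pulse compression, and speed estimation from the delay change between two pulses, is given below. The regularisation constant and the simple peak-picking are illustrative simplifications, not the authors' exact implementation.

```python
import numpy as np

def sct_signal(chirp, ref_received, eps=1e-3):
    """Sensitivity-compensated transmit (SCT) waveform: divide the chirp
    spectrum by the measured transducer-pair response (crude regularised
    inverse filter; eps avoids division by near-zero bins)."""
    C = np.fft.rfft(chirp)
    H = np.fft.rfft(ref_received) / (C + eps)      # pair sensitivity estimate
    return np.fft.irfft(C / (H + eps), n=len(chirp))

def compress(echo, tx):
    """Matched-filter pulse compression (cross-correlation with the
    transmitted waveform)."""
    return np.correlate(echo, tx, mode="full")

def round_trip_delay(echo, tx, fs):
    """Round-trip delay (s) of the strongest compressed echo."""
    lag = np.argmax(np.abs(compress(echo, tx))) - (len(tx) - 1)
    return lag / fs

def radial_speed(echo1, echo2, tx, fs, pulse_interval, c=343.0):
    """Target speed (m/s, positive = receding) from the change in round-trip
    delay between two pulses transmitted pulse_interval seconds apart."""
    d1 = round_trip_delay(echo1, tx, fs)
    d2 = round_trip_delay(echo2, tx, fs)
    return c * (d2 - d1) / (2.0 * pulse_interval)
```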
UMI Georgia Tech, George W. Woodruff School of Mechanical Engineering, Metz-Technopole, France
ABSTRACT
The discovery of a backward beam displacement of ultrasound interacting with a periodically corrugated surface dates back to 1976, when Breazeale and Torbett reported it [M. A. Breazeale and M. Torbett, Appl. Phys. Lett. 29, 456 (1976)]. Since 2002 new investigations have been undertaken, partially motivated by Breazeale's enthusiasm. An overview is presented of how the phenomenon was first discovered in 1976, how a theoretical explanation has been found since 2002 [N. F. Declercq, J. Degrieck, R. Briers, and O. Leroy, Appl. Phys. Lett. 82, 2533 (2003); Nico F. Declercq, Joris Degrieck, Rudy Briers, Oswald Leroy, "Theory of the backward beam displacement on periodically corrugated surfaces and its relation to leaky Scholte-Stoneley waves", J. Appl. Phys. 96(11), 6869-6877, 2004], and what further verifications and discoveries have been made since then [A. Teklu, M. A. Breazeale, Nico F. Declercq, Roger D. Hasse, Michael S. McPherson, "Backward Displacement of Ultrasonic Waves Reflected from a Periodically Corrugated Interface", J. Appl. Phys. 97(8), 084904 1-4, 2005]. The main focus of this presentation, however, is on new research showing the ubiquitous presence of the phenomenon and its importance for nondestructive applications [Sarah Herbison, Nico Declercq, and Mack Breazeale, "Angular and frequency spectral analysis of the ultrasonic backward beam displacement on a periodically grooved solid", in press with J. Acoust. Soc. Am., 2009].
Institute of Experimental Physics, University of Gdansk, Gdansk, Poland
ABSTRACT
Professor Mack A. Breazeale attended several of the Spring Schools on Acousto-optics and its Applications (organized every three years by the University of Gdansk since 1980), starting with the first one and participating for the last time in the 9th one in 2004. His original papers presented during these meetings had an evident importance for the development of this branch of physics and technology. Some recollections of the authors' long cooperation with Mack Breazeale in the field of acousto-optics, and some current results of experiments related to Breazeale's description of finite-width ultrasonic beam reflection phenomena, are presented in this paper. In particular, some recent results of experiments on secondary interference in the near field of ultrasonic light diffraction, and some schlieren pictures of finite ultrasonic beam reflection (including the Schoch-shifted, null-zone and backscattered beams), are demonstrated as well.
Department of Physics, Banaras Hindu University, Varanasi, India
ABSTRACT
Second- and third-order elastic moduli of the fifth-group mononitrides (viz. VN, NbN and TaN) have been evaluated using the Born model for ionic solids. Using the calculated values of the second- and third-order elastic moduli, the temperature dependence of the acoustic attenuation for longitudinal and shear modes of propagation along the <100>, <110> and <111> directions has been studied over a wide temperature range (50 K-500 K). Gruneisen parameters, nonlinearity constants and nonlinearity constant ratios have also been calculated for longitudinal and shear waves along different directions of propagation and polarization. The results are discussed, and it is found that the attenuation contribution due to the thermoelastic mechanism is negligible compared with the phonon-phonon interaction mechanism. It is also observed that VN, which is the hardest of the series, has the least attenuation, while TaN, which is the least hard among the members of the transition metal nitride series, has the highest attenuation.
(1) Industrial Research Ltd, Lower Hutt, New Zealand (2) Victoria University of Wellington, Wellington, New Zealand
ABSTRACT
We present a method to enhance the performance of the soundfield reproduction approach to surround sound technology applicable for reverberant rooms, under the constraint that only a small number of loudspeakers is permissible. The method is based upon the idea of using steerable directional loudspeakers to exploit the room reverberation. In home theatre applications, exact soundfield reproduction is currently handicapped by the unreasonably large numbers of loudspeakers required for operation over audible frequencies. However by exploiting reverberant wall reflections, mirror-sources may be used as additional loudspeakers to help perform the reproduction. Utilizing mirror-sources, the number of loudspeaker locations required throughout the room may be reduced. A large array of omnidirectional loudspeakers can then be replaced by a small number of compact configurable directional loudspeakers. Simulating in a reverberant room with each directional loudspeaker modelled as an array of monopoles, we show that the performance is comparable to a circular array with a much larger number of elements. We quantify the accuracy of the soundfield reproduction and the robustness to calibration error, comparing the proposed scheme with the more standard circular array geometry.
(1) Instituto de Acústica, Consejo Superior de Investigaciones Científicas, Madrid, Spain (2) Laboratoire de Mécanique et d'Acoustique (LMA), Equipe SACADS, UPR-CNRS 7051, Marseille, France
ABSTRACT
One important topic in the aeronautic and aerospace industries is the reproduction of random pressure fields, with prescribed spatial correlation characteristics, under laboratory conditions. In particular, the random wall-pressure fluctuations induced by a Turbulent Boundary Layer (TBL) excitation are a major concern for the cabin noise problem, as this excitation has been identified as the dominant contribution in cruise conditions. As in-flight measurements require costly and time-consuming measurement campaigns, laboratory reproduction has attracted considerable attention in recent years.
Some work has already been carried out on the laboratory simulation of the excitation pressure field for several random fields. It has been found that TBL reproduction is very demanding in terms of the number of loudspeakers per correlation length, and would require a dense and non-uniform arrangement of acoustic sources owing to the different spanwise and streamwise correlation lengths involved.
The present study addresses the problem of directly simulating the vibroacoustic response of an aircraft skin panel using a near-field array of suitably driven loudspeakers, and compares it with the use of an array of shakers and piezoelectric actuators. It is shown how the wavenumber filtering capabilities of the panel reduce the number of sources required, thus dramatically enlarging the frequency range over which the TBL vibro-acoustic response is reproduced with accuracy. Direct reconstruction of the TBL-induced panel response is found to be feasible over the hydrodynamic coincidence frequency range using a limited number of actuators driven by optimal signals. It is shown that piezoelectric actuators, which are more practical to implement than shakers, provide a more effective reproduction of the TBL response than near-field loudspeakers.
KAIST, Daejeon, Korea
ABSTRACT
This study proposes a method to reproduce a desired sound field, both temporally and spatially, in a selected control region by using an array of loudspeakers. The desired sound field means a sound field that exists in concert halls and stadiums, or that we want to create for special effects in movies and computer games. If a sound field identical to the desired field is generated by loudspeakers, a listener in the field will have the same sensation as in the desired field. In other words, this study aims to mimic the desired field in a control region so as to make listeners feel as if they were in the desired field. The proposed method uses a scatterer on whose surface microphones are mounted to measure the surface pressure. Then, using the measured pressure, the input signals to the loudspeakers are obtained that make the reproduced field identical to the desired field in the control region. In other words, if we place the scatterer in any sound field and measure the surface pressure, we can then reproduce that sound field with loudspeakers. This method is based on the fact that the pressure on the surface of a scatterer uniquely determines the incident sound field that would exist if the scatterer were not present. The use of the scatterer enables us to reproduce sound fields without the forbidden-frequency problem. This paper proves this fact, and explains and verifies the proposed method with simple examples.
(1) Chiba IT, Narashino-shi, Chiba, Japan (2) MIX Acous. Lab. (3) Shibaura IT, Minato-ku, Tokyo, Japan (4) Self-owned business (5) TOA. Ltd., Kobe-shi, Hyogo, Japan
ABSTRACT
This work presents a new loudspeaker construction that is completely different from conventional electrodynamic loudspeakers.
A direct-radiator loudspeaker requires a large diaphragm displacement and a low resonant frequency for satisfactory performance in the low-frequency range. The conventional electrodynamic transducer, however practical, is not ideal for this sort of loudspeaker because the motion of the voice coil, driven across an air gap, cannot be controlled perfectly, and its resonance peak with a high Q-factor may result in a long group delay time.
The authors proposed an improved driving construction for direct-radiator loudspeakers using the revolution of ultrasonic motors. It is suitable for radiating high sound pressure at low signal frequencies. The piezoelectric ultrasonic motor is characterized by excellent motion controllability and a high driving mechanical impedance because its rotor contacts its stator tightly. Therefore, a loudspeaker driven by ultrasonic motors is expected to operate with large amplitude and high fidelity in the low-frequency region, even with a heavy diaphragm. However, the continuous revolution of an ultrasonic motor cannot directly produce the reciprocal motion of the diaphragm. The authors first tested a loudspeaker using the reciprocal revolution of an ultrasonic motor; however, it produced sound with remarkable distortion due to the friction characteristics of the motor. The authors' solution for reducing the distortion was the use of linear-motion-type motors. The preliminary model included a metal movement, connected directly to a cone radiator and set on a sliding stage, driven by two piezoelectric linear actuators fixed opposite each other. It radiated a satisfactorily loud sound, but its efficiency and distortion characteristics were unsatisfactory. The next model has an improved, simpler construction: the size and weight of the movement are reduced and the slide stage is removed. The performance of this model will be introduced at the meeting.
(1) National Institute of Information and Communications Technology, Japan (2) Graduate School of Engineering, Kyoto University, Kyoto, Japan
ABSTRACT
We propose a 3-D sound reproduction system based on the boundary surface control principle (BoSC system) and evaluate its performance via demonstration and exhibition. The BoSC reproduction system, dome-shaped and constructed of wood, consists of 62 full-range loudspeakers and eight subwoofers. The BoSC recording system is designed after the geometry of a C80 fullerene and consists of 70 microphones within a 46 cm diameter. In the listening room, the 62 full-range loudspeakers, assisted by the designed inverse filters, reproduce sound fields identical to the primary sound fields by reproducing the sound pressure at the 70 microphone positions surrounding the listener's head. The BoSC system requires very large numerical calculations to reproduce authentic 3-D sound fields; consequently, pre-convolution of the inverse filters is required to reproduce and transmit these fields. Therefore, to realize a real-time 3-D sound field reproduction system, we investigated the optimization of the loudspeaker and microphone configuration using Gram-Schmidt orthogonalization. In the BoSC system, the inverse filters are determined from the inverse of the transfer function matrix measured between each loudspeaker and microphone pair, so a transfer function matrix with a very large condition number degrades the accuracy of the reproduced sound fields. The selection of loudspeakers in an active control system, which includes the BoSC system, is equivalent to the selection of column vectors of the transfer function matrix; to reduce the number of loudspeakers, columns are selected up to the required number. By applying Gram-Schmidt orthogonalization to this selection, loudspeakers are chosen in order of linear independence, from highest to lowest. In this paper, the effect of the reduction of loudspeakers and microphones is evaluated by the subjective assessment of a sound image localization test.
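A minimal sketch of the kind of Gram-Schmidt-based loudspeaker (column) selection described above is given below: at each step, the loudspeaker whose transfer-function column is most linearly independent of those already chosen is kept. This is a generic greedy formulation under the stated interpretation, not the authors' exact algorithm, and the sizes in the usage comment are only illustrative.

```python
import numpy as np

def select_loudspeakers(G, n_keep):
    """Greedy Gram-Schmidt selection of n_keep columns (loudspeakers) of the
    transfer-function matrix G (microphones x loudspeakers): at each step the
    column with the largest residual after orthogonalisation against the
    already selected columns is kept."""
    residual = np.array(G, dtype=complex)
    chosen = []
    for _ in range(n_keep):
        norms = np.linalg.norm(residual, axis=0)
        norms[chosen] = -1.0                       # never re-select a column
        j = int(np.argmax(norms))
        chosen.append(j)
        q = residual[:, j] / np.linalg.norm(residual[:, j])
        # remove the newly selected direction from every remaining column
        residual = residual - np.outer(q, q.conj() @ residual)
    return chosen

# usage (hypothetical sizes): keep 32 of 62 loudspeakers for 70 microphones
# G = transfer functions at one frequency, shape (70, 62)
# kept = select_loudspeakers(G, 32)
```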
CARLab (Computing and Audio Research Laboratory), The University of Sydney, NSW, Australia
ABSTRACT
We present the results of an empirical evaluation of a three-dimensional sound field reproduction system consisting of 32 loudspeakers installed in a hemi-anechoic room at the University of Sydney. This loudspeaker arrangement allows up to third-order, two-dimensional, and fourth-order, three-dimensional Higher Order Ambisonic (HOA) reproduction of sound fields. The ability of this system to recreate a known sound field at the ears of a listener is evaluated using measurements with an acoustic manikin in the optimal listening position. In particular, we compare the Interaural Time Delay (ITD) and the Interaural Level Difference (ILD) generated by HOA for different sound source angles against reference values measured in an anechoic room. In addition, the influence of a listener's position on the quality of the reproduction is investigated based on measurements done for different positions of the manikin around the "sweet spot".
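For reference, broadband ITD and ILD estimates of the kind compared in the paper can be computed from binaural (manikin) measurements roughly as follows; the paper may use band-limited or otherwise more refined estimators, so this is only an illustrative sketch.

```python
import numpy as np

def itd_ild(h_left, h_right, fs):
    """Broadband interaural time delay (s) and level difference (dB) from
    left/right-ear impulse responses or signals measured on a manikin.
    With this convention, a positive ITD means the left-ear signal lags the
    right-ear one (source towards the right)."""
    xcorr = np.correlate(h_left, h_right, mode="full")
    lag = np.argmax(np.abs(xcorr)) - (len(h_right) - 1)
    itd = lag / fs
    ild = 20.0 * np.log10(np.linalg.norm(h_left) / np.linalg.norm(h_right))
    return itd, ild
```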
University of Southampton, Southampton, UK
ABSTRACT
The problem is addressed of reproducing a desired sound field in the interior of a bounded region of space, using an array of loudspeakers that exhibit a first-order acoustic radiation pattern. Previous work has shown that the required loudspeaker signals, in the case of omnidirectional transducers, can be determined by solving an equivalent scattering problem. This approach is extended here to the case of directional loudspeakers. It is shown that the loudspeaker complex coefficients can be computed by solving an equivalent scattering problem. These coefficients are given by the normal derivative of the total pressure field (incident field plus scattered field) arising from the scattering of the target field by an object with the shape of the reproduction region (the region bounded by the loudspeaker array) and with impedance boundary conditions. The expression for this impedance, or Robin, boundary condition is calculated from the radiation pattern of the loudspeakers, assuming that the latter can be expressed by a linear combination of a free-field Green function and its gradient. The solution of the problem can be obtained in closed form for simple geometries of the loudspeaker array, such as a sphere, a circle or a plane, thus providing a meaningful improvement to sound field reproduction techniques such as Wave Field Synthesis or Higher Order Ambisonics. The method proposed is also valid for more general geometries, for which the computation of the solution should be performed by applying the Kirchhoff approximation or by means of numerical methods.
(1) Graduate School, The University of Tokyo, Japan (2) Institute of Industrial Science, The University of Tokyo, Japan
ABSTRACT
Swept signals are widely used nowadays in acoustic measurements to obtain impulse responses of the system under test. The overall spectrum of the sweep, the inverse filter that compresses the sweep into an impulse, and the background noise conditions together prescribe the signal-to-noise ratio (SNR) of the result as a function of frequency. This paper proposes a time-domain sweep synthesis method using composite square and monomial power function modulated sine sweeps that can customize the resulting SNR-frequency function. Theoretical and practical aspects as well as measurement results are presented.
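For orientation, the sketch below measures an impulse response with a standard exponential (log) sweep and its amplitude-compensated inverse filter; it illustrates the sweep/inverse-filter mechanism only, not the composite square and monomial sweeps proposed in the paper, and all parameters are illustrative.

    import numpy as np

    fs = 48000                      # sampling rate (Hz), illustrative
    T, f1, f2 = 2.0, 20.0, 20000.0  # sweep length (s) and frequency range (Hz)
    t = np.arange(int(T * fs)) / fs
    R = np.log(f2 / f1)

    # Exponential sine sweep and its amplitude-compensated, time-reversed inverse
    sweep = np.sin(2 * np.pi * f1 * T / R * (np.exp(t * R / T) - 1.0))
    inv = sweep[::-1] * np.exp(-t * R / T)

    # Simulated measurement: system = a small delay plus an echo, plus noise
    h_true = np.zeros(512); h_true[32] = 1.0; h_true[200] = 0.4
    recorded = np.convolve(sweep, h_true) + 1e-4 * np.random.randn(len(sweep) + 511)

    # Convolving the recording with the inverse filter compresses the sweep
    # back into (a scaled copy of) the impulse response.
    ir = np.convolve(recorded, inv)
    peak = np.argmax(np.abs(ir))
    print("IR peak found at sample offset", peak - (len(sweep) - 1))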
(1) National Institute of Information and Communications Technology, Japan (2) Graduate School of Engineering, Kyoto University, Kyoto, Japan
ABSTRACT
A telecommunication system makes communicating more comfortable if it ensures that parties involved in distant communication feel as if they are located in the same space during their conversation. By applying physically accurate sound field reproduction, we aim to develop a telecommunication system which enables us to feel the presence of a conversational partner. In pursuit of physically accurate sound field reproduction, we have developed a sound field reproduction system based on the boundary surface control principle. We have also developed a two-party sound field sharing telecommunication system using that reproduction system. In this paper, we describe an extension of that system to a three-party system and conduct a subjective assessment of its voice reproduction. To reduce the amount of real-time convolution calculation, we applied Gram-Schmidt orthogonalization to reduce the number of secondary sound sources. In a three-party conversation, it is important to know "who talks to whom". Accordingly, when one of the conversational partners turns towards another partner, we reproduce the natural changes in voice directivity caused by head rotation by detecting the facing angle through image recognition and adjusting the voice filter to suit that angle. However, this requires voice reproduction accurate enough to acoustically perceive "who talks to whom". Thus, we conducted subjective assessments of the speaker's facing angle both in a real environment and in the sound reproduction environment. From the average angle error in the sound reproduction environment, we found that the system reproduced voice with enough accuracy to perceive who talks to whom. We also found that, for half of the subjects, there was little difference in the perceived facing angle between the real environment and the sound field reproduction environment.
Department of Mechanical Engineering, KAIST, Science Town, Daejeon, Korea
ABSTRACT
For hands-free audio communication, beamforming has been widely used to focus a broadband signal. Conventional beamformer algorithms are based on focusing the signals received by a sensor array towards the location of a single target source. In a real situation, however, the source signal of interest cannot be perfectly magnified because of steering errors and side-lobe interference. This is true especially for delay-and-sum beamforming. Moreover, conventional beamformers cannot be directly applied when we wish to focus on multiple sound sources inside a zone of interest. There are two design issues of interest. The first is how to control the beamformer's beam-width, which makes the beamformer robust to steering errors and allows us to change the zone of interest and amplify, with little distortion, any sound source located inside the target beam-width. The second is how to effectively reduce the side-lobe level in the presence of multiple noise sources. For the case of multiple sources, integrating array gains over the desired directions of interest can be applied; this is a direct approach using a conventional beamformer, and the result is the moving average of the beam power over the controlled region. To extend the ideas of conventional beamforming, a concept of regional focusing is suggested. The algorithm is based on the inverse approach of acoustic contrast control, which has been used for designing a desired sound field using loudspeakers. The performance of the two approaches, the direct approach and the inverse-problem approach, is compared. To apply the proposed algorithms in a practical situation, a circular array of microphones is used, and the performance of the proposed algorithms is tested.
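As a point of reference for the conventional approach mentioned above, the following sketch implements narrowband delay-and-sum beamforming on a circular microphone array; the geometry, frequency and simulated data are illustrative assumptions, not the configuration used in the paper.

    import numpy as np

    c = 343.0                         # speed of sound (m/s)
    f = 1000.0                        # analysis frequency (Hz)
    M, R = 8, 0.1                     # 8 microphones on a 0.1 m radius circle
    phi_m = 2 * np.pi * np.arange(M) / M
    mics = R * np.column_stack((np.cos(phi_m), np.sin(phi_m)))

    def steering_vector(theta):
        """Far-field steering vector for a plane wave arriving from angle theta."""
        direction = np.array([np.cos(theta), np.sin(theta)])
        delays = mics @ direction / c          # relative arrival times
        return np.exp(-2j * np.pi * f * delays)

    # Simulated narrowband snapshot: source at 60 degrees plus sensor noise
    x = steering_vector(np.deg2rad(60.0)) \
        + 0.05 * (np.random.randn(M) + 1j * np.random.randn(M))

    # Delay-and-sum: align (conjugate steering) and average, scanning over look angles
    angles = np.deg2rad(np.arange(0, 360, 2))
    power = [np.abs(np.vdot(steering_vector(a), x) / M) ** 2 for a in angles]
    print("estimated source angle:", np.rad2deg(angles[int(np.argmax(power))]), "deg")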
(1) College of Art, Nihon University, Japan (2) Hitotsubashi University, Japan
ABSTRACT
Our purpose is to create real sound made by human motion in a virtual reality environment. In the Japanese traditional dance called "Nihon Buyo", the sound of footsteps is very important because the dancer makes musical beats with footsteps during the performance. We tried to generate these footsteps from the dancers' motion for the purpose of producing a virtual reality performance of Nihon Buyo. In the real environment, material vibration creates sound; this is the fundamental physical principle. If a force is applied to a material, it starts to vibrate; this is the simplest mechanism of sound generation. We therefore had to create the material vibration in the virtual environment, and tried to generate the footstep sound from motion capture data. To create realistic sound in a computer environment, we have to simulate the material vibration due to the excitation. If we can simulate the material vibration, we can recreate in the computer the same conditions as in the real world. To generate the virtual reality footstep sound, we used physical modelling based on the calculation of modelled elasticity, and in addition we used the Finite Element Method (FEM) to simulate the material vibration, in this case the vibration of a wooden floor.
First, to estimate the vibration with the physical model, we measured the floor vibration caused by the dancers' feet as the vibration of the floor due to real human movement. We placed contact-type vibration sensors at three positions on the floor and measured the vibration. To produce the material vibration, an excitation is of course needed; for footsteps, the excitation is simply the motion of the foot. In this modelling, we used the Z-axis motion capture data of the performers' heels as the excitation data. For the physical modelling, we developed a DSP program that translated the movement of the foot into excitation data, based on Modalys, which IRCAM developed. The footstep sound was then generated from the dancers' motion data and the elastic properties of wood. The estimation results suggested a correlation between the measured vibration and the modelled sound.
Center for Noise and Vibration Control (NOVIC), Korea Advanced Institute of Science and Technology, Yuseong-gu, Daejeon-shi, Korea
ABSTRACT
Sound visualization techniques, which visualize useful information about a sound source, such as the direction of the incident wave, from signals measured by a directional microphone array, can be applied as visual aids for hearing impaired people. The beamforming method is a novel way to visualize sound and is advantageous for rapid realization with few microphones. Since visual aids are worn in the form of a helmet or glasses, the effect of scattering by the aid or the user's head should be considered. In this paper, we modelled the scatterer as a rigid sphere and then used the beamforming method to estimate the direction of the incident wave, taking the scattered acoustic pressure on the surface of the rigid sphere as the bearing function. In addition, a resolution analysis was performed and compared with the conventional beamforming method.
(1)Pukyong National University, Busan, Korea (2)Tongmyong University, Busan, Korea
ABSTRACT
Piezoelectric bimorph actuators have been used in a wide range of applications as sensors, vibration sources and position controllers. The shape of the bimorph has been modified to improve its characteristics, especially in the cantilever type. The tip of an AFM (Atomic Force Microscope) is a good example of a cantilever bimorph. However, the characteristics are mainly analysed by numerical methods, because it is hard to solve the wave equation including the shape factor analytically. The optimum design of the bimorph is therefore not easy because of the enormous computational effort. In this study, to analyse the effect of the shape change, an exponential function is introduced as the shape factor of the bimorph, and the solution of the wave equation is obtained. The change in characteristics with the shape of the piezoelectric bimorph actuator is analysed theoretically. Exponentially tapered piezoelectric bimorph actuators are fabricated, and the change in characteristics is measured experimentally.
(1) Department of Electronics, Faculty of Engineering Science, Kansai University, Japan (2) Hosiden Corporation, Japan
ABSTRACT
In this paper, we propose a method for analyzing compact acoustic reproduction systems (e.g. mobile phones) through acoustic equivalent circuits. Measured responses of compact acoustic reproduction systems cannot be represented accurately by analysis based on conventional acoustic theory. Acoustic engineers are consequently obliged to design compact acoustic reproduction systems by trial and error. Moreover, the sound quality of those systems is likely to deteriorate because of the difficulty of such an acoustic design. We therefore clarify the cause of the difference between the measured response and the one calculated by finite element method (FEM) analysis, and consider the possibility of obtaining new acoustic theoretical formulae based on the analysis results, in order to make it easier for acoustic engineers to design compact acoustic reproduction systems.
(1) Department of Otolaryngology, Poznań University of Medical Sciences, Poznań, Poland (2) Institute of Acoustics, Adam Mickiewicz University, Poznań, Poland
ABSTRACT
This study aimed at defining an optimal acoustic signal which could be used in sound emitters at pedestrian crossings adapted for blind and visually impaired people. On the basis of psychoacoustic tests (measurement of detection thresholds of the signals in quiet and in traffic noise) as well as estimation of signal annoyance, two signals were identified from among three groups of tested signals. These two signals met the standard requirements: (1) the TR signal, a signal with a triangular temporal envelope and a sinusoidal carrier with a frequency of 880 Hz, repeated periodically at a rate of 5 Hz; (2) the RC signal, a signal with a rectangular temporal envelope and a rectangular carrier with a fundamental frequency of 880 Hz, repeated periodically at a rate of 5 Hz. These signals were then used to test sound source localization ability.
Localization ability was tested with a modified ADHA (angle of directional hearing acuity) method in which a 2AFC adaptive procedure was used. The test signals were emitted against a background of traffic noise: (i) non-moving and moving cars, (ii) non-moving cars and moving trams; the ratio of the useful signal (65 dB SPL) to the noise (75 dB SPL), S/N, was -10 dB. The tests were conducted on 8 subjects with normal hearing (5 women and 3 men), aged 22-37 years (average 26 years). Statistical analysis of the results obtained in the experiments led to the following conclusions: (1) localization is most difficult at azimuths of 90° and 270°, and the dispersion of results there is significant; (2) RC signals are better localized than TR signals; (3) individual subjects differed considerably with respect to ADHA values.
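As an illustration of the first test signal described above, the following sketch generates a TR-type signal (880 Hz sinusoidal carrier, triangular temporal envelope repeated at 5 Hz); the sampling rate, duration and exact envelope shape are assumptions, since only the carrier and repetition rate are specified above.

    import numpy as np

    fs = 44100                 # sampling rate (Hz), illustrative
    f_carrier = 880.0          # sinusoidal carrier (Hz), as in the TR signal
    f_rep = 5.0                # repetition rate of the envelope (Hz)
    duration = 2.0             # total signal length (s), illustrative

    t = np.arange(int(duration * fs)) / fs
    # Triangular temporal envelope repeating 5 times per second
    # (a symmetric rise/fall within each period is an assumption).
    phase = (t * f_rep) % 1.0
    envelope = 1.0 - np.abs(2.0 * phase - 1.0)
    tr_signal = envelope * np.sin(2 * np.pi * f_carrier * t)

    # tr_signal can now be scaled to the desired presentation level
    # (e.g. 65 dB SPL at the listener) and played through the emitter.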
(1) Department of Intermedia Art and Science, Waseda University, Tokyo, Japan (2) Environmental Research Institute, Waseda University, Saitama, Japan (3) National Institute of Information and Communications Technology, Kyoto, Japan
ABSTRACT
Visualizations help us to understand sound field behaviour. A well-known method of sound field visualization is Kundt's experiment, which visualizes standing waves using light particles. Comprehending both accurate and transient information on sound fields requires measurement at multiple points, and also visualization of those measurements. Microphones are commonly used for such measurements, which means that numerous microphones are needed. On the other hand, a laser Doppler vibrometer (LDV) can be used to measure the average sound pressure along a laser path. We have conducted fundamental research on sound field measurement and visualization using LDV and computed tomography (CT) without ordinary microphones. The measured value contains information integrated along the laser path. If we have data for an area measured from all directions using the LDV, we can estimate the sound field in the area without having to measure at many points using microphones. This kind of signal processing, known as CT, is based on reconstruction from projections. X-ray CT is used in medicine to observe cross-sections of the human body without contact or damage. Similarly, we can observe a sound field by using laser CT.
A new technique for observing sound fields and vibration is thus proposed and being put to practical use. A sound field is generated by a sound source, and it is important to know the relationship between the sound field and the sound source. It is therefore very useful to observe both the sound field and the sound source vibration simultaneously. In this paper, we describe the integrated visualization of sound field and sound source vibration using a 3D laser measurement method. We used the Processing programming language in order to realize interactive visualization of the sound field and sound source vibration. In addition, we conducted an experimental measurement of impulse responses with laser CT and a TSP signal.
(1) Department of Mechanical Engineering, Center for Noise and Vibration Control, Korea Advanced Institute of Science and Technology, Korea (2) Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, Korea
ABSTRACT
Acoustic brightness/contrast control is a method to generate an acoustically bright zone (loud region), or an acoustically bright zone and a dark zone (quiet region) at the same time, using several sound sources. For example, in implementing a private audio system, it has been demonstrated that acoustic contrast control is an effective means to maximize the ratio of acoustic energy density between the acoustically bright zone (listener region) and the dark zone (elsewhere). In acoustic brightness/contrast control, measured transfer functions, which describe the relation between the input signal of a sound source and the output signal of a microphone, are normally used. If there are errors in the measured transfer functions due to noise, system nonlinearity or any kind of disturbance, the desired performance may be degraded. These errors degrade the system performance, for example its brightness, contrast and the spatial mean-squared error in the control zone. In this paper, the errors are expressed in terms of magnitude and phase, and the performance variation of acoustic brightness/contrast control due to errors in the measured transfer functions is formulated mathematically and its validity evaluated.
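To make the control method concrete, the sketch below computes source weights by acoustic contrast control as a generalized eigenvalue problem built from bright-zone and dark-zone transfer function matrices; the matrix sizes, regularization and random data are illustrative assumptions only.

    import numpy as np
    from scipy.linalg import eigh

    def contrast_control_weights(Gb, Gd, reg=1e-6):
        """Source weights maximizing the bright/dark acoustic energy ratio.
        Gb, Gd: transfer function matrices (bright/dark microphones x sources)
        at a single frequency. Returns the eigenvector of the generalized
        eigenproblem (Gb^H Gb) q = lambda (Gd^H Gd + reg I) q with largest lambda."""
        Rb = Gb.conj().T @ Gb
        Rd = Gd.conj().T @ Gd + reg * np.eye(Gd.shape[1])
        w, V = eigh(Rb, Rd)          # generalized eigenvalues in ascending order
        return V[:, -1], w[-1]       # optimal weights and achieved contrast

    # Toy example with random transfer functions for 16 sources,
    # 12 bright-zone and 24 dark-zone microphones.
    rng = np.random.default_rng(0)
    Gb = rng.standard_normal((12, 16)) + 1j * rng.standard_normal((12, 16))
    Gd = rng.standard_normal((24, 16)) + 1j * rng.standard_normal((24, 16))
    q, contrast = contrast_control_weights(Gb, Gd)
    print("achieved contrast: %.1f dB" % (10 * np.log10(contrast)))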
(1) Ph. D. Program in Mechanical and Aeronautical Engg., Feng Chia University, Taichung, Taiwan (ROC) (2) Ph. D. Program in Mech. and Aero. Engg., Feng Chia University and Merry Electronics Co. Ltd., Taichung, Taiwan (ROC) (3) Electroacoustic Graduate Program, Feng Chia University, Taichung, Taiwan (ROC)
ABSTRACT
Sound reproduction in a limited space, with an accumulation of functions for polyphonic sound, is an increasing demand of 4C products. A miniature loudspeaker has to generate a smooth sound pressure level (SPL) over the range of 100 Hz to 10 kHz to meet the requirements of 4C products. In this study we report the formulation and validation of an equivalent circuit model for a miniature loudspeaker. This is achieved by measuring the electroacoustic (Thiele-Small) parameters and performing anechoic chamber measurements. The validated model is then simulated to investigate the effect of key parameters of the miniature loudspeaker, selected on the basis of our past experience. These parameters are the transduction factor, the electrical resistance of the voice coil, the electrical impedance of the voice coil, and the mass, resistance and compliance of the diaphragm. They are assessed on the basis of the TS parameters and their effect on the SPL. This study investigates the effect of the parameters in a two-layer manner. In the first layer, the diaphragm-dependent mechanical parameters and the voice-coil-dependent electrical parameters are isolated and simulated separately, with very promising results. In the second layer, these results are combined to obtain better information on the effect of these parameters on the SPL of the miniature loudspeaker. Finally, by careful tuning of these parameters, an improvement in the performance of the miniature loudspeaker is obtained, with a reduction in the fundamental resonance frequency, a reduction in the second resonance peak, an increase in bandwidth, an increase in the low frequency response and an increase in SPL over the complete range.
National Acoustics Laboratories, NSW, Australia
ABSTRACT
Anecdotal reports associate long-term use of a headband bone-conductor with the potential formation of skin pressure sores. When a bone conductor applies skin pressure that exceeds blood capillary pressure, capillaries collapse and blood flow ceases. Pressure sores can develop if blood flow is cut off for extended periods of time. To test whether bone conductor users are at risk of developing sores, eleven adults were fitted with headband-worn bone conductors (BC461 and B71). The skin contact pressure was measured to see whether it exceeded 3.7 kPa, the estimated pressure needed to collapse capillaries at the mastoid process. Contact pressure was found to be substantially greater than capillary pressure (17 kPa for a standard adult headband and B71, 11 kPa for a BC461). Pressure can be reduced by increasing the contact area, so a BC461 bone conductor was modified to attach larger footplates. A 38 mm diameter footplate was found to produce a contact pressure close to the capillary closure pressure. However, increasing the contact area changes the mechanical coupling impedance and the sensitivity of the device, and threshold measurements for the larger footplate device showed slightly poorer results. Preliminary measurements also indicated decreased skin sensitivity to vibration with larger footplates. To conclude, small footplate bone conductors (e.g., BC461) should be fitted with the least pressure possible commensurate with the bone conductor staying on the head. The skin contact position of the bone conductor should be moved regularly to avoid prolonged disruption of the capillary blood supply to the skin. For future devices, it is recommended that bone conductors be designed with larger footplate areas; this reduces skin contact pressure and skin vibration sensitivity and improves wearer comfort, with only a small loss in measured thresholds.
(1) Hearing CRC, National Acoustic Laboratories, Chatswood, NSW, Australia (2) Macquarie Centre for Cognitive Science, Macquarie University, NSW, Australia
ABSTRACT
Magnetoencephalography (MEG) is a non-invasive technique for 3D brain imaging. This imaging system measures extremely weak magnetic fields surrounding the head that are associated with brain electrical activity. Cortical brain responses to sound can be measured using MEG to detect these magnetic fields. Sound is piped to the subject's ears while the magnetic activity surrounding the brain is observed. The sound system used must not produce magnetic fields which interfere with the MEG sensors. For this reason the transducers are located outside the MEG sensor room and pneumatic tubes are used to deliver sound to the subject. The tubes are made of a non-magnetic material and can be up to four metres in length. The use of long pneumatic tubes leads to some loss of high frequency components in the audio signal. The type of transducer chosen to drive the tubes also affects the high frequency signal content. When the sound has a deficit in the high frequency range, it is perceived to be muffled and speech intelligibility suffers. Speech stimuli such as fricatives that contain high frequency content become degraded, and the range of useful speech stimuli for auditory brain function assessment using MEG becomes restricted. The Speech Intelligibility Standard set out by ANSI requires a minimum high frequency response of 8000 Hz for 100% intelligibility. The frequency response of the new audio system that was developed extends to 9000 Hz. This system is compared with an existing commercial sound system for MEG whose high frequency roll-off occurs at 2000 Hz.
(1) University of Pavia, Via Ferrata 1, 27100 Pavia, Italy (2) Tecnasfalti Srl, Via dell’Industria, 12, loc. Francolino 20080 Carpiano (Mi), Italy
ABSTRACT
Building conservation and refurbishment attract growing attention nowadays. In the public sector in particular, a change of occupancy is commonly used in order to maintain the existing functional layout of spaces and the original structure of the building. Further improvements also need to be considered in order to preserve indoor environmental quality. A case study is provided below by the analysis of the acoustical performance of an auditorium in Italy, the historical S. Giorgio Palace in Genoa. The palace was built in 1260 and was the most important public palace in the town; it became the headquarters of the Port Authority in 1903. Although the highly reflective materials covering the interior surfaces produce high values of reverberation time, the hall is mainly used as a conference hall. The acoustical restoration project, approved by the Ministry of Italian Cultural Heritage, allows only the application of woven materials for the floor and curtains, which can easily be removed in case of a change of use, in order to respect the historical and architectural value of the hall. Acoustical measurements with the impedance tube have been performed up to now in order to define the best woven materials to improve the overall acoustic performance of the hall. The normal incidence sound absorption coefficient of different samples of carpet has been tested, and a procedure for locating the samples in impedance tube measurements has been outlined. Carpet is a textile material with good sound absorption, mainly at high frequencies. In order to improve its acoustic properties at low frequencies, multilayer systems composed of carpet and felt with different characteristics have been experimentally investigated and the optimal configuration has been defined.
(1) Chiba IT, Narashinoshi, Japan (2) MIX Acous Lab, Yokosukashi, Japan (3) Shibaura IT, Tokyo, Japan (4) self-owned business, Yokohamashi, Japan (5) TOA Ltd, Koubeshi, Japan
ABSTRACT
This work presents a new loudspeaker construction which is completely different from conventional electrodynamic loudspeakers. A direct-radiator loudspeaker requires a large diaphragm displacement and a low resonance frequency for satisfactory performance in the low frequency range. The conventional electrodynamic transducer, however practical, is not ideal for this sort of loudspeaker, because the motion of the voice coil, driven across an air gap, cannot be controlled perfectly, and its resonance peak with a high Q-factor may result in a long group delay. The authors have proposed an improved driving construction for direct-radiator loudspeakers that uses the revolution of ultrasonic motors. It is suitable for radiating high sound pressure at low signal frequencies. The piezoelectric ultrasonic motor is characterized by excellent motion controllability and a high driving mechanical impedance, because its rotor contacts its stator tightly. Therefore, a loudspeaker driven by ultrasonic motors is expected to operate with large amplitude and high fidelity in the low frequency region, even with a heavy diaphragm. However, the continuous revolution of an ultrasonic motor cannot directly produce the reciprocating motion of the diaphragm. The important invention by the authors was a mechanism for generating vibration from the revolution of an ultrasonic motor, realized by using an ultrasonic motor with a heavy metal block on its shaft. A preliminary model with this construction radiated a satisfactorily large sound. However, the use of a large metal block prevents reduction of loudspeaker size and weight. The authors reduced the heavy ring by using the revolution of two ultrasonic motors with a common shaft. A loudspeaker using a driver of this construction, called the DMDS (Dual-Motor-De-Spin) model, shows satisfactory performance as a direct-radiator loudspeaker over the frequency range of 30 to 200 Hz.
Faculty of Systems Science and Technology, Akita Prefectural University, Japan
ABSTRACT
One of the purposes of sound emission in public spaces is to transfer the information it carries. Since audible sound waves have wavelengths comparable to the size of the objects around us, it is difficult, because of diffraction and reflection, to prevent their propagation to places where they are not required. If the information in sound could be conveyed only at a desired local spot in the sound field, communication with sound would gain new properties beyond this physical limitation. Although the parametric loudspeaker based on ultrasound is useful for such a need, it can only limit the "direction" of sound propagation, not confine it to a local "spot". In this paper, another approach for the reproduction of speech signals at a local spot is introduced. It is based on signal decomposition into orthogonal basis functions made from random vectors. This approach was applied to the transaural system by Negi et al. It has some difficulties, however, in the reproduction of speech signals at a local spot. One of them is that the content of the speech can be heard from the synthesized signal at points other than the desired spot, although its quality is degraded by the decomposition into random signals. Since the target of our study is the reproduction of speech signals, the locations of the sound sources emitting the decomposed random signals affect how difficult it is to understand the content of the speech. The performance is not adequate when the sound sources are all located at the same distance from the desired spot; in this case the content of the synthesized speech can be heard at points around the desired spot. Distributing the distances of the sound sources from the spot has the potential to improve the performance. In this paper, the relation between sound source locations and the signals synthesized from the decomposition into random signals is discussed via computer simulation, and the synthesized speech signals are demonstrated and evaluated with several measures.
Waseda University, Tokyo, Japan
ABSTRACT
The parametric loudspeaker has very sharp directivity as an ultrasonic speaker, and the audible sound is reproduced through the nonlinearity of ultrasonic wave propagation in air. Recently, using digital signal processing, several studies have established optimal modulation methods for this speaker, and the sound quality has become practical. In Japan at least, such systems have begun to be used in various places, for example as guidance equipment in public facilities. On the other hand, we have been advancing research on high-speed 1-bit signal processing as an analog-to-digital conversion method. The high-speed 1-bit signal contains the spectrum of the sound in the bitstream itself, so the signal can drive the speakers directly, acting as a digital amplifier. Because the sampling rate of the method is quite high, it can record ultrasonic signals, and a phase-controlled speaker array system can easily be constructed without up-sampling. In this paper, we propose a parametric loudspeaker system whose directivity can be controlled, and results from the prototype are shown. The prototype system consists of 576 ultrasonic transducers driven by individually delayed high-speed 1-bit signals. The performance of the directivity control and the method of multi-angle output are discussed, and a method of sound reproduction from an arbitrary point by using a specialized reflection board is described.
Kanazawa Institute of Technology, Japan
ABSTRACT
We can perceive sound localization in stereo reproduction using ordinary left and right loudspeakers. A parametric loudspeaker has sharp directivity and realizes spot sound reproduction. In this paper, subjective tests were conducted using parametric loudspeakers and ordinary loudspeakers, and sound localization with parametric loudspeakers was compared with that obtained with ordinary loudspeakers.
In the subjective tests, the listening positions were A, B and C. Listening positions A and B were at the apex of equilateral triangles whose other vertices were the left and right loudspeaker positions; the side lengths were 0.6 m and 1.8 m, respectively. Listening position C was just in front of the left loudspeaker, to the left of listening position B. The parametric loudspeaker was an equilateral hexagon; its inner and outer diameters were 99 mm and 112 mm, respectively. The acoustic axis of the loudspeaker was aimed at an ear of the subject. IALD (interaural level difference) or IATD (interaural time difference) was used as binaural information. The IATDs were -0.4, -0.2, 0, +0.2 and +0.4 ms. The IALDs were -12, -6, 0, +6 and +12 dB. The IATDs and IALDs corresponded to five directions from left, through centre, to right. The signals were 500 Hz, 1 kHz, 2 kHz and 4 kHz pure tones. When subjects listened to stereo signals at listening position C, the level and time differences of the signals between the left and right loudspeakers were adjusted to take account of the different distances from the left and right loudspeakers to listening position C. Three young males listened 10 times in each signal condition in an anechoic room.
The subjective tests showed that subjects perceived correct sound localization at listening positions A, B and C using parametric loudspeakers, similar to that with ordinary loudspeakers. When the signals were 500 Hz and 1 kHz pure tones, stereo signals with either IATD or IALD were effective. However, when the signals were 2 kHz and 4 kHz pure tones, stereo signals with IALD only were effective, and stereo signals with IATD only were not perceived correctly. Subjects reported that the angle of sound localization between the left and right directions tended to be wider with parametric loudspeakers than with ordinary loudspeakers. It was confirmed that parametric loudspeakers can be used for ordinary stereo reproduction and realize spot stereo reproduction.
(1) Research Institute of Electrical Communication, Tohoku University, Sendai, Japan (2) Graduate School of Information Sciences, Tohoku University, Sendai, Japan (3) Graduate School of Engineering, Tohoku University, Sendai, Japan
ABSTRACT
Ambisonics, a sound field synthesis and reproduction technique, has shown promising results in conveying three-dimensional sound images. The original sound field is expanded as a linear combination of spherical harmonic functions; the coefficients of this expansion are regarded as an encoding of the sound field. In practice, the expansion must be truncated at some arbitrarily chosen degree, known as the Ambisonic order. An encoding scheme of this nature directly describes the physical variables to be recreated; therefore, Ambisonic recordings can be said to be independent of the reproduction system. Precise regeneration of the sound field, however, requires a sufficient number of loudspeakers arranged in a regular layout. It is not difficult to find a decoding matrix to reproduce Ambisonic recordings using a spherical, uniform loudspeaker array. Nevertheless, evenly distributing a large number of loudspeakers is unfeasible in most scenarios. The adoption of Ambisonics in actual listening environments depends on the development of decoding systems flexible enough to admit irregular loudspeaker configurations. Partial solutions to this problem have been advanced, such as the so-called Vienna decoders, but their scope is limited to certain configurations like the 5.1 speaker layout, or restricted to first-order Ambisonics. Moreover, determining the optimal decoding parameters for arbitrary loudspeaker distributions is an NP problem, and irregular configurations are known to lead to ill-conditioned and singular decoding matrices. The present research proposes a method to treat high order Ambisonic recordings for accurate reproduction over irregular loudspeaker arrays. The geometry of the target array is decomposed into regular lattices, irregular but symmetric sub-arrays, and asymmetrically located loudspeakers. The signals for each subset of loudspeakers are computed according to its type. The Ambisonic stream is properly rescaled and standard decoding matrices are used to generate outputs for the regular lattices. Irregular but symmetric sets of loudspeakers are fed mixed-order Ambisonic data streams derived from the original recording. The signals for asymmetrically located loudspeakers result from a pair of space inversions and time evolution operations acting on the sound field described by the Ambisonic data; asymmetrically located loudspeakers are therefore driven in a manner similar to that of wave field synthesis. The proposed decoding scheme was evaluated using an irregular 157-channel loudspeaker array, and comparisons with other decoding methods were conducted. The proposed scheme results in slightly degraded accuracy at the center, but an overall increase in the size of the sweet spot.
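For background on the standard decoding step that the proposed scheme generalizes, the sketch below shows horizontal-only (2D) Ambisonic encoding and a pseudo-inverse (mode-matching) decoder; the loudspeaker layout and order are illustrative, and the hybrid decomposition proposed in the paper is not reproduced here.

    import numpy as np

    def encode_2d(azimuth, order):
        """Horizontal-only Ambisonic encoding of a plane-wave source:
        circular harmonics [1, cos(phi), sin(phi), ..., cos(N phi), sin(N phi)]."""
        comps = [1.0]
        for m in range(1, order + 1):
            comps += [np.cos(m * azimuth), np.sin(m * azimuth)]
        return np.array(comps)

    def decoder_matrix(speaker_azimuths, order):
        """Pseudo-inverse (mode-matching) decoder: the re-encoding matrix Y maps
        loudspeaker gains to Ambisonic components; its pseudo-inverse maps an
        Ambisonic signal vector to loudspeaker gains. Irregular layouts make Y
        ill-conditioned, which is the problem addressed above."""
        Y = np.column_stack([encode_2d(a, order) for a in speaker_azimuths])
        return np.linalg.pinv(Y)

    # Third-order example: 8 loudspeakers evenly spaced on a circle (regular case).
    spk = np.deg2rad(np.arange(0, 360, 45))
    D = decoder_matrix(spk, order=3)
    gains = D @ encode_2d(np.deg2rad(30.0), order=3)   # decode a source at 30 degrees
    print(np.round(gains, 3))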
Department of Telecommunications, Széchenyi István University, Győr, Hungary
ABSTRACT
Human Head-Related Transfer Functions (HRTFs) describe the transmission from the free field to the eardrums. HRTFs are measured on human subjects or on dummy heads and are characterized by the angle of incidence. The dummy-head measurement method allows the acquisition of data at high spatial resolution. Our setup provided HRTF data in 1 degree horizontal and 5 degree elevation steps in different environmental settings. Spectral evaluation in spatial hearing research requires proper representation methods for detailed measurement data. Different 2D and 3D representation methods are presented here, using different coordinate systems, color maps and additional filtering methods programmed under MATLAB. The figures are mainly helpful for HRTF analysis, but MATLAB features also allow their use in applications where directional characteristics and polar plots are required. Furthermore, spectral differences in HRTF magnitudes from above and below the horizontal plane are presented with hair, clothing and glasses applied to the dummy head.
(1) National Semiconductor, Santa Clara, CA, USA (2) Stanford University, Stanford, CA, USA
ABSTRACT
Capacitive ultrasonic transducers offer good sensitivity and wide bandwidth for airborne ultrasound applications. Capacitive micromachined ultrasonic transducers (CMUTs), which are fabricated using MEMS techniques, offer additional advantages including good control over material properties and feature size, permanently attached plates (membranes), and the option of vacuum-sealed cavities. This work describes the modeling, design, and design verification of CMUTs capable of generating high intensity 50-kHz ultrasound for applications such as generating directional audio using a parametric array. We described the device fabrication at the 2008 IEEE Ultrasonics Symposium. An equivalent spring and mass model describes the dynamic behavior of the CMUT plate. We assume the radiation impedance is the sole source of damping. We choose the plate dimensions for a desired resonance frequency and mechanical Q. Higher Q can give better sensitivity but less bandwidth. The parametric array application requires a center frequency of about 50 kHz and several kilohertz of bandwidth. The most important performance metric is the total source pressure for a given dc bias voltage and ac excitation.
The deflection due to atmospheric pressure is an important design consideration. We show that for Q below ~50 the static deflection causes in-plane stress (tensile forces described by a cubic spring constant) to dominate the deflection. We limit our designs to those that operate in the linear spring constant regime. Even designs with a Q of 100 offer sufficient bandwidth for the parametric array. Increasing Q and decreasing the gap size improves sensitivity but leaves less room for dynamic displacement. A better approach is to size the gap such that for given ac and dc voltages, the dynamic plate displacement equals a large percentage of the gap. Using this approach we show that the maximum pressure increases approximately with the product of the ac and dc voltages raised to the 1/3 power. For a 300-V dc bias and 300-V-peak ac excitation, this analysis estimates a maximum pressure of about 146 dB re 20 uPa at 50 kHz. We fabricated devices optimized for a 300-V ac and 300-V dc bias voltage. From impedance measurements and laser-interferometer velocity measurements, we extract the electromechanical model parameters that best fit the fabricated devices. We compare these model parameters with those in the design. This comparison shows good agreement for capacitance, equivalent spring constant and mass, but larger-than-expected damping. The increased damping is likely due to losses in addition to radiation impedance.
Kobayasi Institute of Physical Research, Japan
ABSTRACT
Electroacoustic transducers utilizing the piezoelectric d33 coefficient of cellular polypropylene electrets generally employ two processes, corona charging and expansion of the voids, to increase the piezoelectric constant. However, earlier works noted instability of the transducer performance due to changes in the material structure caused by the second process, applied to expand the voids, which also required a stacked structure. Additionally, the transducers had to be driven at a few hundred volts to generate a sufficient sound pressure level, and the effective frequency band became narrower as the number of stacked sheets increased. Consumer applications were therefore limited.
This paper describes ultrasonic transducers of porous polypropylene electrets that exhibit a d33 of 250 to 350 pC/N without applying the second process. First, the piezoelectric constant of the sample was evaluated by the dielectric resonance method, and the relationship between the sensitivity or efficiency of a transducer utilizing the sample and the d33 constant was studied. Next, to realize a robust transducer, a low-voltage drive should be possible for transmitters, and flat frequency characteristics with high sensitivity should be realized for receivers, in a package with a simple structure. Transmitters and receivers are designed experimentally. The material that stabilizes the piezoelectric d33 coefficient at 250 to 350 pC/N is evaluated to determine the optimal frequency band and driving method. This paper also reports on temperature stability and on application in the airborne ultrasonic range.
Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
ABSTRACT
The head-related transfer function (HRTF) describes the transfer function from a sound source to the listener's ears and plays a central role in binaural spatial and virtual hearing studies. Measuring HRTFs requires rigorous experimental conditions and specially designed equipment, and the procedure becomes very time consuming and tiring for the participants. In this paper a fast HRTF measurement method is presented. With multi-point simultaneous measurement using a loudspeaker array, rigorous acoustical conditions and special equipment are not required, and the needed HRTFs of a subject, together with head and position information, are measured rapidly. The quality of the measured HRTFs is also evaluated. Experiments in an ordinary room demonstrated the method's effectiveness.
Communication Acoustics, Dresden University of Technology, Germany
ABSTRACT
Modern hand prostheses are capable of mimicking finger or wrist movements using electrical motors, gear mechanisms and control elements. The drive systems and the transmission mechanisms emit machinery noise which can trigger or reinforce a foreign body sensation in users. Transfer path analysis was originally developed to investigate individual vehicle noise paths and is a useful tool for troubleshooting and optimizing product sound quality. In this study, the vibroacoustic characteristics of a hand prosthesis are investigated using transfer path analysis. First of all, the acoustic behaviour of the electric motor is investigated. The electric motor is coupled to supporting structures, causing mechanical vibrations of the housing which, in some instances, radiate acoustic energy. Therefore, sound and vibration measurements are carried out to determine the airborne and structure-borne transfer characteristics of the structural elements of the hand prosthesis.
According to Jekosch, the perceived quality of an entity results from the judgement of its perceived characteristics in comparison to its desired or expected characteristics. Therefore, an interview with prosthesis users is conducted to characterize user expectations regarding prosthesis sounds. A psychoacoustic experiment is carried out to evaluate the sounds regarding their reliability and pleasantness. The results of this study convey useful constructive design ideas for hand prostheses concerning sound quality.
(1) Institut de recherche Robert-Sauvé en santé et en sécurité du travail (IRSST), Montreal, Canada (2) Université de Sherbrooke, Sherbrooke, Canada
ABSTRACT
Sandwich constructions such as composite skin-honeycomb core (NIDA) and metal skin-polymer core (MPM) panels are increasingly used in the aeronautics and automobile industries, respectively. It has been shown that this class of constructions enables manufacturers to cut weight and cost while providing good vibration and harshness performance. However, these materials lead to increased sound radiation, which unfortunately results, in some instances, in higher interior noise levels. Consequently, there is a need for accurate and reliable low cost numerical tools to efficiently estimate and optimize the vibroacoustic behaviour of such structures. This paper deals with the prediction of the vibroacoustic behaviour of curved orthotropic sandwich panels. A sandwich finite element is first presented, and its ability to predict the structural response of such structures accurately and efficiently is demonstrated by comparison with classical 3D solid modelling. Next, the element is used within a mixed boundary element/finite element approach (BEM/FEM) to illustrate the effect of curvature and orthotropy on the airborne sound transmission performance of these panels. The examples consist of both sandwich honeycomb and MPM panels.
Department of Mechanical Engineering, National Chiao-Tung University, Taiwan
ABSTRACT
This paper focuses on the optimization of piezoelectric panel speakers. Two piezoelectric ceramic plates serve to excite the diaphragm of the speaker. Through the optimization procedure, the best position at which to mount the piezoelectric ceramic plates on the diaphragm is determined. A finite element model is established using the energy method, in which the electrical system, the mechanical system and the acoustic loading of the transducer are considered as a coupled system. The simulated annealing (SA) algorithm is exploited to attain a low fundamental resonance frequency and high acoustic output. Experiments were conducted to verify the numerical model, and the experimental results were in good agreement with the numerical prediction. The performance of the optimized configuration was significantly improved over the non-optimal design.
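To illustrate the kind of search used here, the following is a minimal, generic simulated annealing loop; the cost function below is a pure placeholder standing in for the coupled finite element objective, and all parameter values are illustrative assumptions.

    import numpy as np

    def simulated_annealing(cost, x0, step, t0=1.0, t_min=1e-3, alpha=0.95, iters=50):
        """Generic simulated annealing: worse candidates are accepted with
        probability exp(-delta/T), letting the search escape local minima
        while the temperature T is gradually lowered."""
        x, fx = np.array(x0, float), cost(x0)
        best_x, best_f = x.copy(), fx
        T = t0
        rng = np.random.default_rng(1)
        while T > t_min:
            for _ in range(iters):
                cand = x + rng.normal(scale=step, size=x.shape)
                fc = cost(cand)
                if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
                    x, fx = cand, fc
                    if fc < best_f:
                        best_x, best_f = cand.copy(), fc
            T *= alpha
        return best_x, best_f

    # Placeholder objective standing in for the coupled FE model: penalize a high
    # fundamental resonance predicted from a hypothetical plate position (x, y).
    def cost(pos):
        x, y = pos
        return (x - 0.02) ** 2 + (y - 0.035) ** 2   # purely illustrative

    print(simulated_annealing(cost, x0=[0.0, 0.0], step=0.01))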
(1) Acoustic Designers team Acusticamente.eu, Italy (2) Materials and Natural Resources Dept., University of Trieste, Italy (3) Department of Chemistry Sciences, University of Trieste, Italy
ABSTRACT
Nowadays, efficient thermal insulation is a principal requirement for buildings and, accordingly, huge amounts of insulating materials are applied in construction, particularly for external walls, radiant floors, roofs, etc. Acoustic insulation is another of the most stringent parameters to be taken into account, both in the construction of new buildings and in their renovation, in order to obtain good indoor comfort. Notwithstanding these needs, only a few materials on the market offer reasonably good thermal and acoustic insulation properties at the same time. More often, materials with good sound insulation properties show poor thermal efficiency, and vice versa. Furthermore, most materials with both good acoustic and good thermal properties are thicker than dedicated acoustic or thermal insulators, which creates technical difficulties in their use, particularly in building renovation. Nano-structured materials, such as microporous or aerogel materials, are characterised by highly efficient thermal insulation in spite of their reduced thickness compared to conventional systems. In the present contribution an overview of the acoustic and thermal properties of novel microporous materials, developed and tested at the Trieste University laboratory (Italy), is presented. A comparison with some of the most common resilient layers for floating floors and resilient partitions is also given. In order to analyse their acoustic properties, tests of dynamic stiffness, compressibility and compressive creep were carried out.
School of Mechanical and Manufacturing Engineering, University of New South Wales (UNSW), Sydney, NSW 2052, Australia
ABSTRACT
This work investigates the use of inertial actuators to actively reduce the sound radiated by a submarine hull under harmonic excitation from the propeller. The axial fluctuating forces from the propeller are tonal at the blade passing frequency. The hull is modelled as a fluid loaded cylindrical shell with ring stiffeners and two equally spaced bulkheads. The cylinder is closed by end plates and conical end caps. The forces from the propeller are transmitted to the hull by a rigid foundation connected to the shaft with a thrust bearing. The actuators are arranged in circumferential arrays and attached to the internal end plates of the hull. Two active control techniques corresponding to active vibration control and active structural acoustic control are implemented to attenuate the structural and acoustic responses of the submarine. An acoustic transfer function is defined to estimate the far field sound pressure from a single point measurement on the hull. The inertial actuators are shown to provide control forces with a magnitude large enough to reduce the structure-borne sound due to hull vibration.
(1) School of Mechanical and Manufacturing Engineering, University of New South Wales (UNSW) Sydney, NSW 2052, Australia (2) Australian Nuclear Science and Technology Organisation (ANSTO), Menai, NSW 2234, Australia
ABSTRACT
The aim of this work is to model the vibrational behaviour of plates joined by the technique of roll swaging. Swage joints are typically found in plate-type fuel assemblies in nuclear reactors. Since they are potentially liable to flow-induced vibrations, it is crucial to be able to predict their dynamic characteristics. It is shown that the contact between the plates resulting from the swage can be modelled by assuming a perfect clamp of all degrees of freedom except rotation about the axis parallel to the swage. A modal analysis is performed on different specimens, and the values of the first natural frequencies are used to find the equivalent torsional spring stiffness by matching the results of a finite element model (FEM).
Parsons Brinckerhoff Australia Pty Ltd, Brisbane, Queensland, Australia
ABSTRACT
The generation of ground-borne noise inside receiver buildings due to nearby construction works can cause significant community impact and impose difficult constraints on construction activities. Several methods exist to predict ground vibration from permanent operational sources; however, the suitability of some of these methods for predicting ground-borne noise from construction activities is not well known and has not yet been thoroughly tested. This paper presents an overview of some of the methods available to estimate ground-borne noise levels from construction activities and discusses the advantages and limitations of each.
Laboratory for the Mechanics of Solids UMR 7649, École polytechnique, 91128 Palaiseau Cedex, France
ABSTRACT
The vibrations of the soundboard of an upright piano in playing condition are investigated. It is first shown that the linear part of the response is at least 50 dB above its nonlinear component at normal levels of vibration. Given this essentially linear response, a modal identification is performed in the mid-frequency domain [300-2500] Hz by means of a novel high resolution modal analysis technique (Ege, Boutillon and David, JSV, 2009). The modal density of the spruce board varies between 0.05 and 0.01 modes/Hz and the mean loss factor is found to be approximately 2%. Below 1.1 kHz, the modal density is very close to that of a homogeneous isotropic plate with clamped boundary conditions. Higher in frequency, the soundboard behaves as a set of waveguides defined by the ribs. A numerical determination of the modal shapes by a finite-element method confirms that the waves are localised between the ribs. The dispersion law in the plate above 1.1 kHz is derived from a simple waveguide model. We present how the acoustical coincidence scheme is modified in comparison with that of thin plates. The consequences in terms of radiation of the soundboard in the treble range of the instrument are also discussed.
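As a point of comparison for the modal density values quoted above, the short calculation below evaluates the classical modal density of a homogeneous isotropic thin plate; the area, thickness and spruce-like material constants are purely illustrative assumptions, not the measured properties of the soundboard studied here.

    import numpy as np

    # Modal density of a thin homogeneous isotropic plate (frequency independent):
    #   n(f) = (A / 2) * sqrt(rho * h / D),   D = E h^3 / (12 (1 - nu^2))
    # The values below are illustrative only.
    A   = 0.6       # board area (m^2)
    h   = 0.008     # thickness (m)
    E   = 1.0e10    # Young's modulus (Pa), spruce-like along the grain
    rho = 400.0     # density (kg/m^3)
    nu  = 0.3       # Poisson's ratio

    D = E * h**3 / (12 * (1 - nu**2))
    n = (A / 2.0) * np.sqrt(rho * h / D)
    print("modal density ~ %.3f modes/Hz" % n)   # same order as the values reported above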
(1) Instituto de Acústica - CSIC, Madrid, Spain (2) Escuela Técnica Superior de Ingenieros Aeronáuticos -UPM, Spain
ABSTRACT
Finite Element Methods are widely used to model vibro-acoustic systems, but as the modal density becomes higher this type of model becomes inaccurate and impractical. This is why in the high modal density region the use of Statistical Energy Analysis (SEA) models has become increasingly popular. SEA has some obvious advantages such as its simple formal expression, being based on linear equation systems or the reduced number of variables involved. But SEA has drawbacks as well, such as the absence of local information or the necessity of frequency averaging. A key quantity in SEA models is the loss factor. This takes into account the energy dissipated within a given subsystem or when power flows from one subsystem to another.
Even though analytical expressions exist for a number of subsystems of differing nature, measurement of the loss factor is still advisable, and a necessity in a large number of cases. The most commonly used method of measuring loss factors is the Power Injection Method. This method is based on injecting power into each subsystem in sequence while the energy in every subsystem is measured. In spite of its simplicity, there remain a number of problems where the accuracy of the results is influenced by various practical issues. In this paper, a Monte Carlo model is used to describe the uncertainty of a two-subsystem problem consisting of two planar elements connected along one side. The influence of the input variables is studied, and the conditioning of the coefficient matrix that models the system is also taken into account.
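To make the Power Injection Method concrete, the sketch below recovers coupling and internal loss factors from one injection test per subsystem, using the SEA power balance P = omega * A * E; the two-subsystem data are synthetic and noise-free, purely for illustration.

    import numpy as np

    def pim_loss_factors(P_inj, E, omega):
        """Power Injection Method for SEA loss factors.
        P_inj[k] : power injected into subsystem k (one subsystem excited per test)
        E[i, k]  : averaged energy of subsystem i measured in test k
        Returns the coupling loss factor matrix eta (eta[i, j], i != j)
        and the internal (damping) loss factors eta_d[i]."""
        n = len(P_inj)
        Pmat = np.diag(P_inj)                 # injected powers, one column per test
        A = Pmat @ np.linalg.inv(E) / omega   # SEA matrix from P = omega * A * E
        eta = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if i != j:
                    eta[i, j] = -A[j, i]      # coupling loss factor i -> j
        eta_d = np.diag(A) - eta.sum(axis=1)  # internal loss factors
        return eta, eta_d

    # Two-plate example with synthetic data at 1 kHz.
    omega = 2 * np.pi * 1000.0
    eta_true = np.array([[0.0, 0.004], [0.002, 0.0]])
    eta_d_true = np.array([0.02, 0.015])
    A_true = np.diag(eta_d_true + eta_true.sum(axis=1)) - eta_true.T
    E = np.linalg.inv(omega * A_true) @ np.diag([1.0, 1.0])  # energies for unit injections
    print(pim_loss_factors([1.0, 1.0], E, omega))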
(1) SV research associates, Kanagawa, Japan (2) Waseda University, Tokyo, Japan
ABSTRACT
A structure, for example a building or a bridge, degrades after it is built due to physical damage, for example from an earthquake, or simply due to ageing. Detecting such degradation is very important for predicting accidents from a safety point of view and for maintaining the structure from an economic point of view. Degradation is detectable as changes in the dynamic properties of the structure. It is practical to detect changes in the frequency responses of a structure under non-stationary excitation, e.g. natural wind forces. This issue can be considered as the identification of a transfer function with a non-stationary input signal. The discrete Fourier transform (DFT) is widely used to obtain the spectrum of the response of a structure, but the amplitude spectrum given by the DFT is subject to temporal changes in the non-stationary input signal. In this article, we take a statistical approach to this issue. We introduce the short-interval period (SIP) distribution to detect the spectrum of the signal. The SIP distribution is defined as the statistical frequency distribution of the dominant frequencies of short fractions of the measured data; it is therefore independent of the magnitude of the unknown input signal. This paper presents the theory of the SIP and an application to a scale model experiment. In the experiment, the SIP distribution gave a stable spectrum independent of the sequence of random numbers used as a non-stationary input. The SIP can be used to estimate the dynamic properties of many types of structures.
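The following sketch follows the description above: the record is split into short fractions, the dominant frequency of each fraction is taken from its DFT magnitude peak, and the histogram of these dominant frequencies forms the SIP distribution; the frame length, bin count and synthetic signal are illustrative assumptions.

    import numpy as np

    def sip_distribution(x, fs, frame_len=1024, n_bins=64, f_max=None):
        """Short-interval period (SIP) distribution: histogram of the dominant
        frequencies of short fractions of the measured signal x."""
        f_max = f_max or fs / 2
        dominant = []
        for start in range(0, len(x) - frame_len, frame_len):
            frame = x[start:start + frame_len] * np.hanning(frame_len)
            spec = np.abs(np.fft.rfft(frame))
            freqs = np.fft.rfftfreq(frame_len, 1.0 / fs)
            dominant.append(freqs[np.argmax(spec[1:]) + 1])   # skip the DC bin
        hist, edges = np.histogram(dominant, bins=n_bins, range=(0.0, f_max))
        return hist, edges

    # Synthetic example: a 12 Hz structural resonance excited by non-stationary noise.
    fs, T = 200.0, 120.0
    t = np.arange(int(fs * T)) / fs
    envelope = 1.0 + 0.8 * np.sin(2 * np.pi * 0.01 * t)       # slowly varying "wind"
    x = envelope * np.sin(2 * np.pi * 12.0 * t) + 0.5 * np.random.randn(len(t))
    hist, edges = sip_distribution(x, fs, f_max=50.0)
    print("most frequent dominant frequency: %.1f Hz" % edges[np.argmax(hist)])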
(1) Fuji Engineering Co., Ltd, Japan (2) Kanazawa University, Japan (3) West Nippon Expressway Co., Ltd, Japan
ABSTRACT
Several complaints arose from houses near the bridge in question, concerning a rattling sound and vibration caused by infrasound and ground vibration, respectively. Typical trucks in Japan with rear leaf suspension have suspension vibration frequencies of about 3.0 Hz, while the tire spring vibration appears at about 10-20 Hz. The occurrence of the infrasound and ground vibration radiated from the bridge is related to the truck's suspension spring vibration and/or the tire spring vibration. In this study, bridge vibration tests were conducted using test trucks and ordinary trucks to investigate the cause of the rattling sound and ground vibration. The examination showed that the trucks' suspension spring vibration caused excessive bending vibration of the bridge, which in turn was transmitted to the nearby houses as infrasound and ground vibration.
(1) Ulsan College, Korea (2) Pusan National University, Korea
ABSTRACT
This paper reports active vibration control of clamped beams using positive position feedback (PPF) controllers. The control actuator is a piezoceramic patch. Direct velocity feedback control using the piezoceramic patch was performed and revealed limitations due to instability problems when practically implemented. We then considered PPF control to overcome these instability limitations. We first implement a single-mode PPF controller and obtain a significant reduction of vibration at the tuned mode. We also implement a multi-mode PPF controller under a single-channel control scheme; a good reduction performance is obtained at the lowest three or four modes. The presented multi-mode PPF controller can be suggested as an active vibration feedback controller with a large gain margin.
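As a sketch of the single-mode case described above, the following compares the open-loop resonance of one structural mode with its closed-loop response under a PPF compensator (a second-order filter fed back positively); the modal frequency, damping values and gain are illustrative assumptions, not the parameters used in the paper.

    import numpy as np
    from scipy import signal

    # Single structural mode (position output) and a positive position feedback
    # (PPF) compensator tuned near the mode; all values are illustrative.
    wn, zn = 2 * np.pi * 40.0, 0.005         # plant mode: 40 Hz, 0.5 % damping
    wf, zf, g = 2 * np.pi * 42.0, 0.3, 0.3   # PPF filter frequency, damping, gain

    plant = signal.TransferFunction([wn**2], [1, 2*zn*wn, wn**2])
    ppf   = signal.TransferFunction([g * wf**2], [1, 2*zf*wf, wf**2])

    # Closed loop with POSITIVE feedback: G / (1 - G*C)
    num = np.polymul(plant.num, ppf.den)
    den = np.polysub(np.polymul(plant.den, ppf.den),
                     np.polymul(plant.num, ppf.num))
    closed = signal.TransferFunction(num, den)

    w = 2 * np.pi * np.logspace(1, 2, 400)
    _, mag_ol, _ = signal.bode(plant, w)
    _, mag_cl, _ = signal.bode(closed, w)
    print("peak response open loop: %.1f dB, with PPF: %.1f dB"
          % (mag_ol.max(), mag_cl.max()))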
Delft University of Technology, The Netherlands
ABSTRACT
Non-destructive inspection of plates and plate-like objects is often performed by local thickness measurements requiring scanning of the object under investigation. A viable alternative is given by flexural waves, which propagate along the plate and can be used to retrieve information on remote locations by measuring the scattered field. In order to obtain an image of inhomogeneities in the plate, the dispersive characteristics of flexural waves have to be taken into account. The resolution and the quality of the resulting image can be improved by several means. The most important parameters in this context are the frequencies employed for imaging and the positions of sources and receivers with respect to the region to be imaged, especially the covered aperture. On the processing side, further improvement of the quality of the obtained images can be achieved by regularizing the inverse imaging problem using a priori assumptions on the structures to be expected. Results obtained by regularization with maximal sparseness or minimal total variation are presented. As an alternative to these "mathematical" regularization techniques, a more advanced physical model of the scattering can be employed to explain the measured data. Abandoning the Born approximation and including multiple scattering between the defects in the model is shown to lead to images with high resolution and low noise level.
Graduate School of Science and Technology, Faculty Of Engineering, Niigata University, Japan
ABSTRACT
Concrete is a useful and indispensable material for the construction of modern buildings. However, it contracts as it dries, causing cracks on its surface and within the structural body, and long periods of use after construction make these cracks deeper and wider. Such defects can lead to spalling and falling-debris accidents, so diagnosis of the cracks is necessary to prevent them. The well-known ultrasonic method for measuring crack depth is not suitable for inspecting large areas of concrete wall because of its high cost and demanding test conditions, such as the need for coupling grease. The authors therefore tested other techniques for detecting crack depth and found that the air-column resonance caused by a crack behaves like a wind instrument: the crack depth corresponds approximately to one quarter of the wavelength of the first-order resonance for an air column closed at one end, and to one half of the wavelength for a column open at both ends. We propose a method of non-destructive crack-depth detection based on this resonance phenomenon. Spectral analysis of the resonance peak at a frequency of several hundred hertz can detect cracks with widths within 1 mm in the concrete walls of typical buildings. Assuming extended use for special concrete constructions with very thick walls, additional detection experiments were carried out. As the depth increases and the width narrows, detection by the first-order resonance, which falls below one hundred hertz, becomes difficult. Fortunately, higher-order resonances at several hundred hertz were found and could be clearly observed in the experiments, making it possible to detect cracks as deep as about 1 m with widths of about 1-2 mm. The paper introduces the principle and measurement procedure of the resonance-based diagnosis and describes the experimental results.
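For reference, the resonance relations invoked above are the standard air-column formulas (with c the speed of sound in air, roughly 343 m/s at room temperature; the numerical example is illustrative only):

    d \approx \frac{c}{4 f_1} \quad \text{(column closed at one end)}, \qquad
    d \approx \frac{c}{2 f_1} \quad \text{(column open at both ends)}

so a first-order resonance observed at, say, f_1 = 343 Hz would correspond to a crack depth of roughly 0.25 m for a closed-end column.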
Hanyang University, Seoul, Korea
ABSTRACT
The effects of resilient isolators and viscoelastic damping materials on reducing heavyweight floor impact sounds in reinforced concrete structures were investigated using FEM simulations and field measurements. The dynamic properties of the materials were also measured with the beam transfer function method to predict the vibration characteristics and impedance of the floor structures; the results showed that the damping materials had a larger loss factor and dynamic elastic modulus than the resilient isolators. From the field measurements, it was found that the natural frequency of the floor structure increased and its vibration acceleration level decreased with the use of damping materials (heavyweight impact sound levels also decreased below 80 Hz), whereas the sound levels increased in the same range with the use of resilient isolators.
Department of Mechanics and Vibroacoustics, AGH University of Science and Technology, Kraków, Poland
ABSTRACT
This paper presents an experimental study of the effect of an elastomer layer on the radiation and transmission loss of a rectangular steel plate. Two plates are examined: a steel plate with bonded PZT elements, and a second plate with an additional porous elastomer layer. The experimental set-up consists of two chambers with a test opening between them; the sending chamber provides reverberant field conditions and the receiving chamber is semi-anechoic. Plate vibration and sound radiation are measured under stepped harmonic excitation, both mechanical (via the piezoceramics) and acoustic (via a loudspeaker).
The aim of the paper is to show how additional damping in the form of a rubber layer affects active control of plate vibration. By changing the stiffness of the test plate and the characteristics of its sound radiation, the transmission loss is increased. Measurements are carried out using the two methods of excitation, mechanical (PZT) and acoustic (loudspeaker). A microphone inside the semi-anechoic chamber is used to measure the change in sound radiation after active control of the test plate via the piezoelements. Active control of the test plate vibration is performed using the LabVIEW environment and a data acquisition system with a voltage amplifier. The results show a sound radiation reduction of 10 dB to 28 dB at particular frequencies and of about 10 dB over the whole spectrum.
(1) The University of Salford, Salford, UK (2) The Open University, Milton Keynes, UK
ABSTRACT
The main spectral property of a Sonic Crystal structure is a notable sound attenuation related to the Bragg band gaps. This effect is observed in air and makes Sonic Crystals effective noise barriers in a particular frequency range. This performance can be extended to a wider range of frequencies by introducing scatterers supporting multiple resonances of various types. In this paper, Sonic Crystals composed of infinitely long multi-resonant composite scatterers are studied. First, a composite scatterer consisting of a concentric elastic shell inside an outer rigid shell with four slits is considered. Theoretical and experimental results show the existence of the axisymmetric resonance of the elastic shell followed by the annular cavity resonance. The second type of scatterer considered is a U-shaped resonator composed of thin elastic plates. The plates form an open cavity, so that the resonances are defined by their bending motion as well as by the geometry of the scatterer. Theoretical analysis of the elastic-acoustic coupling in a single scatterer is based on the Kirchhoff-Love asymptotic theory. Numerical results on the overall performance of the proposed structure are obtained with the finite element method, and the predictions are compared with the experimental results.
(1) Mobile Terminals Development Division, NEC Corporation, Kanagawa, Japan (2) System Jisso Research Laboratories, NEC Corporaiton, Kanawaga, Japan (3) Waseda University, Shinjyuku-ku, Tokyo, Japan
ABSTRACT
In this paper, the acoustical design of piezoelectric speakers for slim mobile phones is described. Slim mobile phones are currently in high demand in the market; however, the thickness of electrodynamic speakers restricts the design flexibility of the phone case. Piezoelectric speakers, with a thickness of 1.0 mm or less, offer one solution to this problem. We have developed a method of acoustic, mechanical and electrical design that achieves good acoustic performance with piezoelectric speakers in slim mobile phones. Piezoelectric speakers have several peculiar characteristics that differ from those of electrodynamic speakers, and the following problems must be addressed.
1) Optimization of the acoustic structure of the mobile phone case around the speaker.
Because of their rigid structure, piezoelectric speakers produce an excess sound pressure level at high frequencies. Speakers should generally have a flat sound-pressure-level frequency response; to achieve this with piezoelectric speakers, the high-frequency sound pressure level must be suppressed by the acoustic structure of the phone case.
2) Vibration damping of the case.
The case of a slim mobile phone is thinner, and therefore less stiff, than that of a conventional mobile phone. Thin, low-stiffness cases are easily set into vibration by the speaker. Out-of-phase sound radiated by the vibrating case degrades the acoustic characteristics of the speaker, so the case vibration must be damped.
3) Optimization of the electrical design (analogue circuit, digital signal processing).
Piezoelectric speakers present a capacitive electrical impedance, which is inversely proportional to frequency (see the relation recalled after this list). To balance the overall acoustic system, the power dissipation at high frequencies must be suppressed, even though the high-frequency content is important for producing loud sound with piezoelectric speakers. The high-frequency component of the input signal must therefore be adjusted according to its magnitude.
We describe methods and novel ideas to overcome the above issues, supported by experimental and simulation data.
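As a minimal illustration of the impedance relation behind item 3 (standard circuit theory; the capacitance value below is an assumed example, not a device specification):

    |Z(f)| = \frac{1}{2\pi f C}

so for an assumed C = 1 µF the impedance magnitude falls from about 160 Ω at 1 kHz to about 16 Ω at 10 kHz, which is why the high-frequency drive power must be managed as described above.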
(1) School of Mechanical Engineering, The University of Western Australia, Crawley, WA, Australia (2) College of Marine, Northwest Polytechnical University, Xi'an, P.R.China
ABSTRACT
Noise reduction (NR) of an acoustical enclosure with flexible boundary walls has been predicted using the Statistical Energy Analysis (SEA) method by several authors. Although it is useful for a rough NR estimation, a large discrepancy often exists between the predicted and measured NR levels. Moreover, some physical mechanisms which may affect NR prediction were not addressed in the existing SEA models. The sources of the discrepancy were identified by investigating the limitation of SEA for system energy transfer in the entire frequency range of noise transmission, and the effect of enclosure wall coupling and sound-structural coupling on the NR and its prediction accuracy. This paper presents a modified SEA model, which includes the non-resonant response and more accurate transmission coefficient of finite panels, and compares the model prediction with experimental results. A reasonable agreement between the prediction and experiment was observed.
Ecole Polytechnique Fédérale de Lausanne, EPFL STI IEL LEMA, Lausanne, Switzerland
ABSTRACT
Variable acoustic properties can be obtained on the voicing face of an electroacoustic transducer by very simple control strategies, among which is the electrical shunting of the loudspeaker terminals (short circuit, variable electrical load, negative resistance devices, hybrid feedback control). The present paper describes the underlying theory unifying all these impedance control strategies, introducing the general concept of the "electroacoustic absorber". Performances obtained by computational and experimental methods are presented. Among the numerous properties of this novel concept, the possibility of tailoring dedicated electrical filters with specified transfer functions, accounting for the sensing of actual acoustic quantities in the vicinity of the transducer's voicing face, is demonstrated. This specific capability reveals the potential of such an electroacoustic device to act as a truly collocated actuator/sensor for different active noise control strategies. Further discussion of the various concepts is provided, leading to concluding remarks and foreseen future developments.
(1) University of Western Australia, Crawley, WA, Australia (2)Defence Science and Technology Organisation, HMAS Stirling, Rockingham WA, Australia
ABSTRACT
This paper summarizes the first part of the results of measured sound radiation from a torpedo-shaped structure under axial excitation. The structure, built for this study, is two meters in length and consists of a cylindrical shell, a semi-spherical shell at one end and a conical shell at the other. Due to the boundary constraints imposed by the semi-sphere and the cone at the ends of the cylinder, the structure exhibits notable differences in its dynamic behaviour from that of a shear-diaphragm supported cylinder with closed ends. We studied the first 13 structural modes experimentally and then concentrated on the sound radiation from each of those modes in an anechoic chamber. Findings from this experimental work may be used to verify and support previous analytical and numerical predictions of underwater sound radiation from a submarine hull. They may also find broader application in the noise analysis and control of unmanned underwater vehicles and marine structures.
(1) ETSI Aeronáuticos. Universidad Politécnica de Madrid, Madrid, Spain (2) Instituto de Acústica, Consejo Superior de Investigaciones Científicas Serrano, Madrid, Spain (3) European Space Agency, Noordwijk, The Netherlands (4) Dutch Space B.V., Leiden, The Netherlands
ABSTRACT
One of the primary elements of a space mission is the energy subsystem, whose critical component is the solar array. The behaviour of these elements during the ascent phase of the launch is critical for avoiding damage to the solar panels, which are the primary source of energy for the satellite in its final configuration. The vibro-acoustic response to the sound pressures depends on the solar array size and gap thickness. The stowed configuration of the solar array constitutes a multiple system composed of structural elements and the air layers between the panels. The air between the panels affects the frequency response of the system, not only by modifying the natural frequencies of the wings but also by acting as an interaction path between the wings of the array. The usual methods for analysing the vibro-acoustic response of structures are the FE and BE methods for the low frequency range and the SEA formulation for the high frequency range. The main issues in the latter method are, on the one hand, to select the appropriate subsystems and, on the other, to identify the parameters of the energetic system: the dissipation and coupling loss factors.
From the experimental point of view, the subsystem parameters can be identified by exciting each of the subsystems and measuring the energy of all the subsystems composing the solar array. Although this is theoretically possible, in practice it is difficult to apply loads to the air gaps. To analyse this situation, two different approaches can be studied, depending on whether or not the air gaps between the panels are included explicitly in the problem. Both modelling philosophies are compared for the particular case of a three-wing solar array in stowed configuration, which has been tested in an acoustic chamber. The measured data on the solar wings allow, in general, the loss factors of the configuration to be determined. The paper presents a description of the test and the measurements on the solar wings in terms of the acceleration power spectral density. Finally, comparison of the simulations with the experimental results on the spacecraft solar array allows the performance of each modelling technique to be evaluated, analysing the influence on the apparent properties of the system in terms of the SEA loss factors.
Structural Engineering Department, Federal University of Minas Gerais, Brazil
ABSTRACT
The aim of this paper is to investigate the fluid-structure interaction behavior of building floating floors simply supported at randomly distributed points (springs or resilient pads). A simple hybrid model using the FE method and the Jinc function approach was implemented for the calculation of the sound power radiated by concrete floors. The model predicted the sound power radiated from the direct wave field in the vicinity of the driving point, as well as the sound power radiated from the reverberant wave field. The results were validated against an analytical formulation. The analysis has highlighted the main physical characteristics of the sound radiation mechanism. At low frequencies, the power radiated by the near field of the point force was found to make a significant contribution to the total power radiated.
SACADS Research Group, Laboratory of Mechanics and Acoustics (LMA), Marseille, France
ABSTRACT
The acoustic radiation modes, introduced 20 years ago by Borgiotti and mostly used in Active Structural Acoustic Control (ASAC), are a set of velocity distributions that contribute independently to the sound power radiated by a vibrating structure. A key feature is that the radiated sound power is reduced if the contribution of any one of the radiation modes is cancelled. Moreover, the radiation modes best capture the radiated sound power among all admissible velocity patterns. In this context, a series of simulation and experimental studies has shown that controlling the first radiation modes of a vibrating panel is an efficient ASAC strategy, despite the fact that they are frequency dependent and thus more difficult to sense and excite than the structural modes.
Exact solutions have been found to the radiation modes problem for baffled planar structures. They are sought as velocity distributions with finite spatial support which have maximal energy concentration in a given radiation bandwidth. They satisfy a concentration problem the solutions of which involve prolate spheroidal wave functions which only depend on the structure geometry, on the frequency and on the physical properties of the surrounding fluid. An excellent agreement has been found between the closed-form radiation mode solutions and those calculated from an eigen-decomposition of the radiation resistance matrix. These analytical solutions have been generalized to determine closed-form expressions for the singular value decomposition of integral operators that govern the radiation of baffled planar structures into the far-field or the geometric near-field. These solutions provide further insight into the singular pressure and velocity vectors of the radiation problem. In particular, they provide an indication on the number of degrees of freedom of the radiated field detected above the noise threshold. This corresponds to the number of singular values, that should be accounted for, in the inverse source problem of reconstructing a stable approximation to an unknown boundary velocity from measurement of the radiated pressure field. Current work is focussed on closed-form expressions for the near-field radiation modes of baffled planar structures. They could provide optimal near-field sensing strategies for the design of efficient active noise control systems.
Laboratoire Vibrations-Acoustique, INSA Lyon, Villeurbanne, France
ABSTRACT
The damping effect on the energy flow between an excited structure and a receiving acoustic cavity filled with a heavy fluid will be studied in this paper. In particular, in the high frequency domain, classical Statistical Energy Analysis (SEA) relation describing the energy transmitted between two subsystems indicates that the energy ratio of the two subsystems is independent of the damping loss factor of the excited subsystem. However, this relation is based on a weak coupling assumption which does not hold in the case of a heavy fluid.
We therefore study the consequences of violating this assumption for the energy flow and the damping effect. The Double Modal Formulation (DMF) is used to describe the fluid-structure coupling from the modes of each uncoupled subsystem. This formulation allows us to study the convergence of the modal series, to determine easily the modal energy of each subsystem and to compare these results with the classical SEA assumptions. We observe that the fluid added-mass effect and the non-resonant coupling have a strong effect on the energy flow between the structure and the cavity at frequencies below the critical frequency. As a result, the energy ratio of the two subsystems is not always independent of the damping loss factor of the excited subsystem, as will be shown in an example.
Data Physics Corporation, California, USA
ABSTRACT
Supercalender sections in the paper manufacturing process are arguably among the most prone to severe vibration problems. Supercalenders are machine sections with their own paper reel unwind and rewind stands, used to impart a very high quality surface finish to paper to enhance its printability. The denim- and polymer-covered rolls used in supercalender operation are designed to deform in the nip, leading to vibration problems that are very difficult to troubleshoot. The noise generated in and around the machines when the worst vibration phenomena occur can be intolerable for operating personnel, and while a variety of condition monitoring strategies have been adopted by the industry in general, they have typically failed to provide complete diagnostics of supercalender vibration issues. This paper discusses the use of a high-performance, multichannel dynamic signal analyzer and the methods used towards the complete characterization of supercalender vibration problems at a paper mill, leading to a successful reduction in noise levels, significantly increased component life and enhanced product quality.
(1) Department of Mechanical and Industrial Engineering, Gadjah Mada University, Indonesia (2) PT Pupuk Kaltim, Kalimantan Timur, Indonesia
ABSTRACT
One important factor affecting the productivity of mechanical equipment is its reliability. In this research, the synthesis gas compressor is investigated because most of the downtime at the Kaltim (East Borneo) Fertilizer Plant originates from failures of this component. The objective of this investigation is to find out how the vibration levels of synthesis gas compressors affect their reliability. Two types of synthesis gas compressor were investigated: System G-1101 and System K-403. System G-1101 consists of five sub-systems: a Low Pressure Turbine (LPT), a High Pressure Turbine (HPT), a Low Pressure Compressor (LPC), a Medium Pressure Compressor (MPC) and a High Pressure Compressor (HPC). System K-403 has three sub-systems: a Turbine (T), a Low Pressure Compressor (LPC) and a High Pressure Compressor (HPC). Each system has a different type and configuration of foundation. The reliability of each system is evaluated using a Reliability Block Diagram (RBD), making use of historical maintenance and inspection data collected over the last 10 years (the series-system calculation is sketched below). From these data, the reliability of System G-1101 and System K-403 for a mission time of 2 years is found to be 0.07% and 31.81% respectively. Furthermore, a comprehensive study of the available vibration data from each sub-system revealed a strong correlation between the vibration level of each sub-system and the reliability of the system, i.e. the reliability decreases as the vibration level increases. Most of the System G-1101 sub-systems have vibration levels above 20m, which is considered to be beyond the recommended level. This is attributed to the poor support and foundation type of System G-1101, which has a frame-type foundation, whereas System K-403 has a block-type foundation. This is confirmed by the polar and Bode plots of the systems' vibration.
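A minimal sketch of the series reliability-block-diagram logic mentioned above (not the plant's actual data; the failure rates below are hypothetical placeholders used only to show the calculation):

    import math

    # Hypothetical constant failure rates per year for each sub-system (illustrative only)
    failure_rates = {"LPT": 0.5, "HPT": 0.6, "LPC": 0.4, "MPC": 0.45, "HPC": 0.55}

    def series_reliability(rates, mission_years):
        """Series RBD: the system survives only if every sub-system survives."""
        r_sys = 1.0
        for rate in rates.values():
            r_sys *= math.exp(-rate * mission_years)  # exponential survival model per sub-system
        return r_sys

    print(series_reliability(failure_rates, mission_years=2.0))

With a series arrangement, even moderately unreliable sub-systems multiply into a very low system reliability over a two-year mission, which is consistent with the low figure reported for System G-1101.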
(1) Gerencia de Control e Instrumentación, Instituto de Investigaciones Eléctricas, Mexico (2) School of Chemical Engineering and Analytical Science, The University of Manchester, UK (3) DSTL Chemical and Biological Sciences, UK
ABSTRACT
Acoustic streaming induced by ultrasonic vibrations of circular plates, used to control the clumping position of particles, is experimentally investigated. Three-dimensional finite element modelling was employed to predict the acoustic streaming fields produced on circular plates of different sizes. Experimental validation was carried out using glass beads on the surface of the plates, a stepped horn and a bolted 40 kHz piezoelectric transducer. The particles on the plate always accumulated at the displacement nodes, and the agreement between the modelled and experimental frequencies was very good (<2% frequency error). Particle levitation and formation of clumps were attained by positioning a reflector above the circular plates to obtain resonance in the air. The vibration patterns of the circular plates were thus adequately correlated with the patterns of streaming and clump formation.
(1) Graduate School of Kanagawa University, Yokohama, Japan (2) Department of Mechanical Engineering, Kanagawa University, Yokohama, Japan
ABSTRACT
This paper presents a new structural design concept where the structural intensity technique is used to reduce the sound radiated from a compound plate structure. This concept is based on the modal expansion of the structural intensity on a plate. Structural intensity in modal form can be expressed by the superposition of weight coefficients and "cross-modal functions", and the weight coefficients depend on the location of the point excitation coordinates. The cross-modal function is determined by the product of two modes with spatial derivatives and is expressed in a vector field. The modal form of structural intensity gives the desired distribution of structural intensity in terms of changing the weight coefficients (excitation conditions) and/or the cross-modal functions (structural design). The cross-modal functions can be classified into two types of power flow: vortex-type and straight-type distributions. In the case of the vortex-type cross-modal function, the power propagated through the plate is zero because the integral of vortex flow is zero. Then, the modal form of the intensity suggests that enhancement of the vortex-type cross-modal function relative to the intensity leads to less power transmission through the plate. Vortex-type intensity on a plate would be useful for interrupting power transmission between plate subsystems of compound plate structures. On the other hand, the straight-type cross-modal functions would be useful for promoting the power transmission. In this study, numerical simulations are carried out to demonstrate the interruption of the power transmission as a result of generating vortex-type structural intensity on the middle plate of a three-plate structure (J-shaped structure).
(1) Mobile Terminals Development Division, NEC Corporation, Kanagawa, Japan (2) NEC Saitama Ltd, Kodama-Gun, Saitama, Japan (3) Waseda University, Shinjyuku-ku, Tokyo, Japan
ABSTRACT
In this paper, the acoustical design of mobile phones is described. Recently, a variety of audio functions, including telephone calls, television, music playback and movie playback, have been installed in mobile phones. These functions are achieved with acoustic devices such as receivers, loudspeakers and headphones, and the loudspeaker is the most important device, being used for many of the audio functions. We define high-quality sound for a mobile phone speaker as a flat frequency response, adequate loudness, a wide sound field and adequate directivity. Optimizing the sound hole locations and the dimensions of the acoustic structures around the speaker is necessary to achieve high-quality sound. However, because the speaker is one of the largest devices in a mobile phone, design factors such as the sound hole conditions and the speaker cavity volumes are restricted by the shape of the phone. We have therefore studied the following two design factors for achieving high-quality sound in mobile phones. 1) Location of the speaker and the sound hole. This factor governs the directivity of the sound, which should be adjusted so that it remains optimal at the user's listening position, taking into account the characteristics of each frequency band.
High-frequency components have sharper directivity than low-frequency components. For example, if the sound hole is located on the side opposite the display, the tone quality becomes muffled in the high frequency band and the total sound pressure level decreases at the user's listening position. We have therefore studied the correlation between sound hole locations and acoustical properties for the achievement of high-quality sound, and have developed mobile phones with a variety of sound hole locations. 2) Dimensions of the sound hole and the speaker cavity volumes. This design factor governs the frequency response of the sound pressure level. The acoustic structure gives rise to Helmholtz resonance (the standard relation is recalled below). However, the dimensions of the sound hole and the speaker cavity volumes are decreasing with the miniaturization of mobile phones, and the frictional loss of air then strongly influences the acoustical properties. We have therefore developed a method of acoustical design that takes the influence of the frictional loss of air into account using hydrodynamics. In this paper, we describe our design method for achieving high-quality sound in various mobile phones.
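For reference, the Helmholtz resonance mentioned in item 2 follows the standard lossless relation (with c the speed of sound, A and L_eff the area and effective length of the sound hole, and V the cavity volume; the frictional losses discussed above are neglected here):

    f_0 = \frac{c}{2\pi} \sqrt{\frac{A}{V\,L_{\mathrm{eff}}}}

which shows why shrinking the cavity volume V pushes the resonance upward and makes the response increasingly sensitive to the sound hole dimensions.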
(1) Mobile Terminals Development Division, NEC Corporation, Japan (2)NEC Saitama Ltd., Japan (3)System Jisso Research Laboratories, NEC Corporation, Japan (4)Waseda University, Japan
ABSTRACT
In this paper, the design of novel ultra-thin piezoelectric speakers for mobile phones is described. Owing to the expanding demand for slim mobile phones in the market, ultra-thin loudspeakers with high sound quality are strongly demanded. A piezoelectric speaker has the advantage of thinness over the electrodynamic speakers used in typical mobile phones. However, the rigid structure and the low internal friction must be improved to realise high sound quality in piezoelectric speakers. In this study, we have successfully developed 0.9 mm thick piezoelectric speakers consisting of a piezoelectric bimorph transducer and an elastic support film, achieving high sound quality and low power consumption.
1) A piezoelectric bimorph transducer consisting of single-layer 40 μm ceramics and a shim material has been developed. Piezoelectric speakers generally use transducers made of multilayer ceramics; however, the power consumption of multilayer ceramics is high because of their low impedance. Single-layer ceramics have the advantage of higher impedance than multilayer ceramics. We have developed a production process for 40 μm ceramics and achieved piezoelectric transducers with high efficiency.
2) An elastic support structure using a polymer film has been developed. The polymer film is inserted between the frame and the shim material that restrains the ceramics. Thanks to the low bending stiffness and the mechanical damping of the elastic support film, the vibration of the speaker is amplified in the low frequency range and the peak near the resonance frequency is smoothed. Moreover, durability against drop impact has been improved. In this paper, we also describe a method for optimizing the acoustical properties based on our experiments and calculations. The correlation between the acoustical properties and design factors such as the ceramic size and the film thickness has been studied. We have adjusted the acoustical properties using a correlation diagram, and the resulting piezoelectric speakers achieve high sound quality and low power consumption in mobile phones. The speakers have been applied in practice in our ultra-thin folding mobile phones. We present a comparison of the sound characteristics of a general mobile phone using an electrodynamic speaker and an ultra-thin mobile phone using a piezoelectric speaker.
Maritime Platforms Division, Defence Science and Technology Organisation, Victoria, Australia
ABSTRACT
The control of radiated sound is important for many engineering structures. This paper investigates the active control of sound radiation from flat plates through modelling. The active control of sound radiation from the plates is implemented by either rearranging the vibration field or damping the vibration of plate resonant frequencies to reduce sound radiation.
Two control systems to attenuate sound radiation from the plates are considered. Firstly, a feedforward control system is studied which can be applied for the case of tonal excitation where a reference signal is available. This control system may be realized by using a feedforward controller with appropriate transducers (actuators and sensors). The control actuators considered provide either a central point force or four corner point forces, and could be piezoelectric or inertial actuators. The error sensor is either a volume velocity sensor or a sound power sensor. Secondly, feedback control systems are investigated which can be used in the case of random excitation where a reference signal is not available. This analysis is focused on systems using simple single-channel feedback controllers, so that self-contained, compact and light sensor-controller-actuator devices can be built. Up to sixteen point force actuators with collocated point velocity sensors are controlled in a decentralized fashion by a single-channel fixed gain feedback control system for each unit. The control effectiveness, stability and robustness of each control configuration are discussed.
This paper considers the behaviour of flat steel plates that are 3 mm thick with dimensions 1440 mm by 710 mm (area approximately 1 m²), excited by either an acoustic or a structural source. The study has indicated that feedforward control with four corner point forces provides effective reduction in sound radiation up to 1000 Hz. Decentralized feedback control has also shown significant reduction in sound radiation. For either an incident acoustic plane wave or a point force excitation, a random distribution of four or more control units damps all the resonances up to 1000 Hz, which results in overall reductions of sound radiation of over 80%. In general, the magnitude of the sum of the control forces required by the feedback control systems is less than the primary force. Because collocated and compatible transducers are used, each feedback control system is unconditionally stable. Thus, decentralized feedback control could be a feasible way of implementation in practice. This study will provide a guideline for setting up flat plates with active control measures for further experimental validation.
Laboratoire PHASE, Université Paul Sabatier, Toulouse, France
ABSTRACT
The issue of sound attenuation in multi-layered structures leads to the search for optimal geometrical stackings of layers in order to minimize vibrations and the transmission of waves across the material. Various orientations of unidirectional layers are studied and their acoustic properties are presented using a geometrical simplification. In a first approximation, only bulk waves are considered. They propagate in a multi-layered medium in which the wave speed changes from one layer to another. This simple model of propagation in homogeneous media allows fast computation of the transmission coefficient using the transfer matrix formalism, so that the effect of different geometrical stackings can be analyzed. In particular, self-similar or fractal structures possess topological characteristics combining periodic and disordered features and therefore present very interesting acoustic properties: resonances and band gaps. The transmission coefficient is studied for classical stackings (simple, periodic, random) and for two alternative stackings based on the Cantor set.
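A minimal sketch of the transfer-matrix computation referred to above, for normal-incidence bulk waves through a stack of homogeneous layers (the layer properties, frequency and Cantor-style ordering below are illustrative assumptions, not the paper's data):

    import numpy as np

    def layer_matrix(rho, c, d, f):
        """2x2 transfer matrix of one homogeneous layer relating pressure and particle velocity."""
        k, z = 2 * np.pi * f / c, rho * c
        return np.array([[np.cos(k * d), 1j * z * np.sin(k * d)],
                         [1j * np.sin(k * d) / z, np.cos(k * d)]])

    def transmission(layers, f, z0):
        """Pressure transmission coefficient with the same semi-infinite medium (impedance z0) on both sides."""
        m = np.eye(2, dtype=complex)
        for rho, c, d in layers:
            m = m @ layer_matrix(rho, c, d, f)
        return 2.0 / (m[0, 0] + m[0, 1] / z0 + m[1, 0] * z0 + m[1, 1])

    # Illustrative triadic Cantor-like ordering of two materials A and B (placeholder values)
    A, B = (1200.0, 2500.0, 0.001), (950.0, 1800.0, 0.001)
    stack = [A, B, A, B, B, B, A, B, A]
    print(abs(transmission(stack, f=50e3, z0=1.21 * 343.0)) ** 2)

Swapping the ordering of the same set of layers (periodic, random, Cantor-like) changes only the matrix product, which is what makes this formalism convenient for comparing stackings.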
School of Mech & Manuf Engineering, University of New South Wales, Kensington, Australia
ABSTRACT
In bearing prognostics it is important to be able to feed back information on the current size of a spall, in order to determine the rate of progress of the fault, and make better estimates of remaining useful life (RUL). A method has recently been developed to measure the time delay between the entry and exit events so as to be able to estimate the fault size. It was found that the two events are quite different, the entry being a step response and the exit an impulse response, with very different strength and frequency content. A range of signal processing techniques were developed to enhance the two signatures so as to better measure the time delay between them, but the estimates were affected to some extent by the processing parameters. In the current paper, the entry and exit events are simulated as modified step and impulse responses with precisely known starting times, so as to be able to determine the effects of various simulation and signal processing parameters on the estimated delay times. One of the ways of determining the delay time is by using the cepstrum to measure the "echo delay time", and already the simulation has been found useful in pointing to artifacts associated with the cepstrum calculation which affect even the simulated signals and have thus prompted modifications of the processing of real signals. The paper presents the results of the study into the effects of simulation parameters such as dominant frequency content, and processing techniques such as optimum choice of wavelets used to choose a frequency band to balance the entry/exit events.
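As a hedged illustration of the cepstral echo-delay estimate mentioned above (a generic real-cepstrum peak search, not the authors' processing chain; the quefrency search range is an assumed parameter):

    import numpy as np

    def echo_delay_cepstrum(x, fs, min_delay=1e-4, max_delay=5e-3):
        """Estimate an echo delay (s) as the quefrency of the largest cepstral peak in a given range."""
        spectrum = np.fft.rfft(x)
        log_mag = np.log(np.abs(spectrum) + 1e-12)   # avoid log(0)
        cep = np.fft.irfft(log_mag)                  # real cepstrum
        quefrency = np.arange(len(cep)) / fs
        band = (quefrency >= min_delay) & (quefrency <= max_delay)
        return quefrency[band][np.argmax(cep[band])]

An echo delayed by tau contributes a ripple of period 1/tau to the log spectrum, which appears as a peak at quefrency tau; the artifacts discussed in the abstract arise from exactly this kind of calculation applied to band-limited, noisy signals.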
Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, P.R.China
ABSTRACT
The floors on high speed trains are often constructed from composite aluminium extrusions. Noise is a major issue with such configurations, as the internal plate lattice physically bridges the top and bottom panels. The coupling of the panels typically results in poor acoustic performance. In this paper, Finite Element Method (FEM) was used to model the vibratory responses of the extrusion in the low frequency range. Experimentally, the transfer mobilities and vibration energy of the panel were measured for a given mechanical excitation. The direct method was used to estimate the radiation efficiency of the panel. Later studies coupled FEM with Statistical Energy Analysis (SEA) and Boundary Element method (BEM) to predict the sound radiation from the aluminium extrusion to cover higher frequencies.
Graduate School of Science and Engineering, Yamaguchi University, Japan
ABSTRACT
Shell structures are often employed in mechanical structures for their light weight and high rigidity. Typical vibration countermeasures are direct damping of the vibration energy source and cutting off the main transmission paths of the vibration energy; in either case it is necessary to identify the vibration transmission paths. The Vibration Intensity (VI, or Structure Intensity, SI) method is one technique for visualizing vibration transmission paths. In vibration transmission through curved shells, an extensional wave occurs in addition to the flexural wave and directly affects the noise radiated by the shell. Previous research reported that the extensional wave has little direct effect on the noise but interacts with the flexural wave in the curved shell, giving rise to complicated transmission paths of the vibration energy. However, characteristics of the vibration transmission paths in curved shells remain unknown. The purpose of this research is to elucidate these characteristics using the VI method. A finite element analysis was conducted on a model consisting of two flat parts, I and II, and a curved part between them. The curvature radius ranged from 20 to 100 mm. Flat part I was excited in the out-of-plane direction by a sinusoidal force with an amplitude of 1 N, and the excitation frequency was varied from 10 to 3000 Hz. The results for the flexural and extensional VI show that the power ratio with which the flexural wave transforms into the extensional wave at the boundary between flat part I and the curved part is greater for smaller curvature radii.
(1) Department of Mechanical Engineering, Isfahan University of Technology, Isfahan, Iran (2) School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore
ABSTRACT
Passive hydraulic engine mounts are widely applied in automotive and aerospace applications to isolate the cabin from engine noise and vibration. In aerospace applications, the engine mounts are placed between the engine and the fuselage. In fixed-wing turbofan engine applications, the notch frequency of each hydraulic engine mount is tuned either to the N1 frequency (the imbalance excitation frequency of the engine's low-speed shaft) or to the N2 frequency (the imbalance excitation frequency of the high-speed shaft) at the cruise condition. Since most of today's passive hydraulic engine mount designs have only one notch, isolation is possible at either N1 or N2, but not at both. In this paper, a novel double-notch passive hydraulic engine mount design is proposed. The new design consists of two inertia tracks, one of which contains a tuned vibration absorber (TVA). This design exhibits two notch frequencies and can therefore provide vibration and noise isolation at two different frequencies. The notch frequencies of the new design are easily tunable, and the notches can be placed at N1 and N2 with ease. The new design concept and its mathematical model are presented in detail, and the simulation results are discussed.
(1) Graduate School of Tokai University, Hiratsuka, Japan (2) Tokai University, Hiratsuka, Japan
ABSTRACT
Vibroacoustic coupling phenomena occur in a variety of situations and are generally studied with the goal of controlling noise. However, we also expect them to be applied to new technologies based on the energy stored in each system. In this study, we investigate vibroacoustic coupling between structural vibrations and the internal sound fields of thin structures. We consider a cylindrical structure with thin plates at both ends and investigate the coupling between the plate vibrations and the internal sound field when external periodic forces are applied to the respective end plates. This coupling is investigated theoretically and experimentally by considering the behaviour of both plates and the acoustic characteristics of the internal sound field as the periodic forces are varied. In the analytical model, the end plates are supported around their circumference by springs whose stiffnesses are adjusted to bring the support conditions close to the actual conditions of the experiment, and the lateral wall of the cylinder is assumed to be structurally and acoustically rigid to simplify the problem. The acoustic characteristics are evaluated by the sound pressure level, which is maximized with respect to the phase difference between the two plate vibrations as the phase difference and relative amplitude of the two periodic forces are varied. The behaviour of the plate vibration is studied by changing the phase difference and the cylinder length. Comparing the characteristics of the two systems, it is shown that vibroacoustic coupling is effective in increasing the acoustic energy, and that the phase difference depends greatly on the acoustic mode that contributes to the formation of the sound field.
(1) Kobe Steel Ltd., Japan (2) Seike University, Tokyo, Japan
ABSTRACT
Structure-borne sound radiation can be reduced by adding sound absorption to the surface of the structure. In this paper, the reduction of sound radiation obtained by adding sound absorption using a perforated plate is described. First, the reduction of the sound radiated from a vibrating rectangular structure by the perforated plate is characterized experimentally: the sound radiation is reduced in the frequency range above the peak absorption frequency of the perforated plate and increased in the range below that frequency. The reduction is then predicted by a self-developed numerical analysis technique, an acoustic-structural coupled analysis in which the boundary element method is applied to the sound fields and the finite element method is applied to the perforated plate. The reductions obtained by experiment and by numerical analysis show good agreement. Through these studies, it is concluded that a perforated plate is effective in reducing structure-borne sound radiation and that the self-developed numerical analysis technique is useful for predicting the sound radiation from a perforated plate and for designing low-noise structures.
Nanyang Technological University, Singapore
ABSTRACT
An adaptive electromechanical tuned vibration absorber (ETVA) consisting of a voice coil and a single-degree-of-freedom mass-spring system is developed. The natural frequency of the ETVA is varied and tuned by the addition of a variable capacitor. In this paper, the design, mathematical model and experimental data of the ETVA are presented. The experimental data indicate that the natural frequency of the ETVA can be varied by 66% using capacitive shunting of the voice coil. Analytical studies have also revealed that if the wires of the voice coil could be replaced with room-temperature superconductors, the natural frequency of the ETVA could be changed by a factor of 8 using resistive shunting of the voice coil. The ETVA is then used in conjunction with a passive fluid mount to create a variable-notch semi-active fluid mount. Finally, the mathematical model and simulation results for the ETVA combined with the passive fluid mount are described and presented.
Institute of Mathematics, Cracow University of Technology, Cracow, Poland
ABSTRACT
It is well known that piezoelectric elements can be used for active vibration and noise control, or for sensing, in beams and thin plates. In the two-dimensional case the shape of the transducer can be included in the control design process for multidimensional structures as an additional parameter. This paper is concerned with a rectangular plate with simply supported boundaries and actuators consisting of two identical piezoceramic elements bonded symmetrically to each side of the plate. The main focus is on the mathematics used to describe the moments acting upon the structure and induced by arbitrarily shaped actuators. The abstract theory of distributions in the sense of Schwartz provides a useful tool for modelling the external forces. In the case of rectangular actuators bonded so that their edges are parallel to the plate edges, the external loads can be described by a tensor product of distributions. However, for actuators of different shapes (e.g. triangles) the definition of the tensor product of distributions must be extended. Even though one of the two terms in the product depends solely on x while the second depends on both x and y, the product may have no meaning; a simple counterexample can be constructed, and some additional assumptions are needed. In the paper we formulate these assumptions, propose a definition of the extended tensor product of distributions and examine some of its properties.
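For reference, the classical tensor product that this extension starts from is defined, in the sense of Schwartz, by

    \langle u \otimes v,\ \varphi(x,y) \rangle
      \;=\; \big\langle u(x),\ \langle v(y),\ \varphi(x,y) \rangle \big\rangle,
    \qquad u \in \mathcal{D}'(\mathbb{R}_x),\; v \in \mathcal{D}'(\mathbb{R}_y),\; \varphi \in \mathcal{D}(\mathbb{R}^2),

and the difficulty addressed above is that this definition does not apply directly when one of the factors itself depends on both x and y, as happens for non-rectangular actuator shapes.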
School of Engineering, Edith Cowan University, Joondalup, WA, Australia Email: h.wu@ecu.edu.au Phone: 0415418980
ABSTRACT
Base isolation is found effective in reducing torsional response of structures with mass eccentricity when subjected to earthquakes. In this study, dynamic characteristics of an eccentric five-storey benchmark model, isolated with laminated rubber bearings (LRB) and lead core rubber bearings (LCRB), were examined using a shaker table and four different ground motions. The earthquake-resistant performance of LRB and LCRB isolators was evaluated. It was observed that both transverse and torsional responses were significantly reduced with the addition of an LRB or LCRB isolated system regardless of ground motion input. However, the LRB was identified to be more effective than LCRB in reducing relative torsional angle, model relative displacements, accelerations and angular accelerations, and therefore, provided a better protection of the superstructure and its contents.
College of Power and Energy Engineering, Harbin Engineering University, Harbin, P.R.China
ABSTRACT
A model of coupled rectangular plates with an elastically restrained coupling edge is developed using the Fourier series and Rayleigh-Ritz method to analyse the power transmission and dynamic response. Flexural and in-plane vibrations are considered simultaneously, and four sets of springs are added at the coupling edge to describe the mutual interaction of the two plates. With the developed model, two main cases are considered: a coupling edge free of restraint from the ground, and a coupling edge elastically restrained from the ground. For the first case, the contributions and effects of the vibrational and internal force components of both flexural and in-plane vibration on the power transmission are investigated numerically in detail, and the effects of other influential factors on the power flow and dynamic response, such as the coupling angle and coupling stiffness, are also examined. For the second case, several configurations with various restraints from the ground at the coupling edge are investigated. It is shown that the transverse shearing forces, twisting moments and in-plane shearing forces have little influence on the power transmission, whereas the bending moments and the in-plane longitudinal forces play an important role. The power flow across the coupling edge is influenced by the coupling angle together with the coupling stiffness, and the effects of the coupling angle and stiffness on the dynamic response of the plates are clearly observed. The restraints from the ground at the coupling edge have a significant effect on the dynamic response of the coupled plates.
(1) School of Mechanical Engineering, University of Western Australia, Crawley, WA, Australia (2) DSTO, HMAS Stirling, Rockingham, WA, Australia
ABSTRACT
The underwater sound radiation from a layered structure is determined by the properties of elastic waves in the structure and by the interaction between structural waves and between structural and fluid waves. In this paper, a model of a layered infinite plate is used to address how these properties and interactions influence the near-field and far-field characteristics of the radiated sound. In particular, the effect on underwater sound radiation of a structural discontinuity introduced into the layered plate by a finite signal conditioning plate is investigated. The scattering of structural waves in the layered plate by the signal conditioning plate is used to explain the changes in the radiated sound due to the discontinuity.
School of Engineering and Information Technology, University of New South Wales at the Australian Defence Force Academy, Canberra, Australia
ABSTRACT
Composite materials are widely used in the aeronautical, marine and automotive industries because of their excellent mechanical properties, low density and ease of manufacture. However, composite laminates are susceptible to delaminations, which may not be visible externally but can substantially affect the performance of the structure. The final objective of this research program is to develop a structural health monitoring system based on vibration monitoring to detect, locate and assess delamination damage in laminated composite structures. Towards this end, finite element modelling is employed to simulate the dynamic response of composite laminates with delamination damage and to extract their vibration parameters. This is done firstly to establish how modal frequencies are affected by variations in delamination size and in-plane and through-thickness location, and secondly to create a database which can then be used to develop a methodology for solving the inverse problem, namely the determination of delamination location and size from measured frequency changes.
Initially, specially orthotropic cantilever beams are modelled, with and without delaminations. The modelling is then extended to beams with other boundary conditions and to plates with generally orthotropic symmetric lay-ups. At first the delamination interface is modelled without contact elements; in this case the sub-laminates are unconstrained by each other's presence, i.e. free to penetrate each other. Though not physically possible, this allows the natural frequencies to be established quickly through numerical modal analysis. For a more realistic representation, the delamination interface is modelled with surface-to-surface contact elements, which prevent penetration but allow separation. The non-linearity introduced by the contact elements essentially renders the model unsuitable for modal analysis, and in this case the natural frequencies are extracted by harmonic or transient analysis. The natural frequencies extracted from the current numerical simulations are compared with theoretical formulations and previous numerical studies, as well as with the results of modal testing conducted on composite beams with delaminations.
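As a hedged illustration of how such a frequency database might be queried for the inverse problem (a simple nearest-neighbour look-up used only as a sketch, not the methodology the authors ultimately develop; the database entries below are placeholders):

    import numpy as np

    # Hypothetical database rows: ((delamination size, spanwise location, interface index),
    #                              first few FE-predicted modal frequencies for that case)
    database = [
        ((0.02, 0.30, 1), np.array([52.1, 318.4, 887.0])),
        ((0.04, 0.30, 1), np.array([50.7, 305.2, 861.3])),
        ((0.04, 0.50, 2), np.array([51.5, 311.8, 842.9])),
    ]

    def identify(measured_freqs):
        """Return the database case whose relative frequency deviations best match the measurement."""
        measured = np.asarray(measured_freqs, dtype=float)
        best = min(database, key=lambda row: np.linalg.norm((row[1] - measured) / measured))
        return best[0]

    print(identify([51.0, 310.0, 850.0]))

In practice a denser database and a more robust matching or optimisation scheme would be needed, but the sketch shows how measured frequency changes map back to candidate delamination parameters.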
Institute of Fundamental Technological Research, Warsaw, Poland
ABSTRACT
A fully coupled multiphysics model is applied to the problem of simultaneous active and passive reduction of noise generated by a thin panel under forced vibration, providing many relevant results of various types (noise and vibration levels, the voltage required for the control signals, the efficiency of the approach). The active approach is validated experimentally. To this end, the plate of the panel is excited so as to generate a noise with significant lower and higher frequency contributions. The low-frequency noisy modes are then reduced by actuators in the form of piezoelectric patches glued with epoxy resin at locations chosen optimally thanks to the multiphysics analysis, while the emission of higher frequency noise is to be attenuated by well-chosen thin layers of porous materials. A fully coupled finite element system for the problem is derived. This multiphysics approach is accurate: advanced models of porous media can be used for the porous layers; the piezoelectric patches are modelled according to the fully coupled electro-mechanical theory of piezoelectricity; the layers of epoxy resin are taken into account; and finally the acoustic-structure interaction involves modelling a surrounding sphere of air with non-reflective boundary conditions to simulate the conditions found in an anechoic chamber. The FE simulation is compared with experimental results. The sound pressure levels computed at points at different distances from the panel agree very well with the noise measured at these points, and the computed voltage amplitudes of the control signals turn out to be good estimates.
Institut f. Festkoerpermechanik, TU Dresden, Germany
ABSTRACT
Simulation techniques in the linear acoustics of rooms with arbitrary geometry often lack sufficient knowledge about the dynamics of the surrounding walls, yet the walls affect the sound distribution significantly. This is why boundary value problems (BVPs) for fully coupled structure-fluid systems should be solved, unless the discretized form of the BVP is transformed into a system in the sound pressure alone by means of the Schur complement. This transformation produces a fully occupied coupling admittance matrix, and it is certainly difficult to reproduce all entries of this matrix from sound pressure data alone. For this reason the authors introduce an approximation of the coupling admittance by defining local admittance values on the boundary. This type of boundary condition simplifies the coupling admittance matrix. The approach is demonstrated on a simple structure-fluid coupled system whose analytical equations are arranged in a matrix form matching a standard BEM-FEM formulation, followed by a short discussion of its applicability.
Department of Design, Development, Environment and Materials, The Open University, Milton Keynes, UK
ABSTRACT
The porous structure and near-surface layering of the ground influence the propagation of acoustic and seismic pulses originating from above-surface sound sources. Snow cover modifies the acoustical properties, and frozen ground adds to the layering effect. A numerical model, the Pulse Fast Field Program for Layered Air Ground Systems (PFFLAGS), developed originally (as FFLAGS) for continuous sound sources, is outlined. It is used to fit radial and vertical seismic signals recorded by a geophone and resulting from above-ground explosions over three types of ground, including 'hard' and 'soft' soil and snow cover. An effective linear source pulse has been determined assuming that non-linear effects are small at the ranges of interest. The resulting deduction of parameters describing the near-surface ground structure is based first on fitting pore-related parameters to the above-ground acoustic waveforms received by microphones, and then on fitting the other parameters, including elastic constants and layer dimensions, to the vertical and radial components of soil velocity measured by collocated geophones. A similar procedure involving conjunctive use of buried probe microphone and geophone data has been used for fitting acoustic-to-seismic coupling spectra for quarry sand and a dry friable soil. Prospects for using this approach more generally for deducing soil strength, air permeability, moisture content and structure from non-invasive acoustic and seismic measurements are discussed.
(1) Institute of Structural Analysis, Technische Universität Dresden, Dresden, Germany (2) School of Civil and Environmental Engineering, University of New South Wales, Sydney, NSW, Australia
ABSTRACT
This paper is devoted to the numerical modelling of transient exterior acoustics problems. The propagation of acoustic waves in waveguides and infinite domains bounded by a circular or spherical cavity is addressed. These systems can be decoupled into a series of scalar problems using the method of separation of variables. For each mode, high-order doubly asymptotic boundaries for the resulting scalar wave equation are proposed. This is based on a continued-fraction solution of the frequency-dependent modal impedance coefficient, which relates the modal pressure to the modal flux at the near field / far field interface. The continued-fraction solution is transformed into a series of linear equations in the frequency domain by introducing internal variables. This corresponds to a system of first-order differential equations in the time-domain, which completely represents the unbounded medium in a transient analysis.
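As a purely illustrative sketch (the specific doubly asymptotic expansion used by the authors is not reproduced here), a continued-fraction representation of a frequency-dependent modal impedance \(Z_j(\omega)\), relating modal pressure to modal flux, typically takes a nested form such as

\[
Z_j(\omega) \;\approx\; C_0 + i\omega M_0 \;-\; \cfrac{1}{C_1 + i\omega M_1 - \cfrac{1}{C_2 + i\omega M_2 - \cdots}} ,
\]

where each level of the fraction introduces one internal variable, so that truncating after a finite number of levels yields the set of linear equations, and hence the first-order differential equations in time, mentioned in the abstract.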
Numerical experiments demonstrate that both evanescent and propagating modes can be modelled with high accuracy. This leads to stable time-domain solutions, even for long-time simulations. Highly accurate representations can be achieved for arbitrarily high modes. The proposed method is used to study several transient acoustic wave propagation problems in waveguides and infinite domains bounded by a circular cavity.
Monopole Research, Thousand Oaks, CA, USA
ABSTRACT
We describe selected aspects of the development and applications of an elasto-acoustic fast integral solver designed to analyze sound propagation inside a human head, to examine mechanisms of energy transfer to the inner ear through air- and bone-conduction pathways, and to assess the effectiveness of noise-protection devices. The approach uses an integral-equation formulation of acousto-elasticity and overcomes memory and execution-time restrictions of conventional methods through the use of a non-lossy Fast Fourier Transform-based matrix compression algorithm parallelized on distributed-memory systems. Such a computational technique dramatically reduces both the storage and the solution time requirements: from O(N^3) for the direct solution of a system of matrix equations to approximately O(N log N), where N is the number of unknowns. Effectively, the method allows one to solve the resulting discrete dense linear system representation of the integral equation with computational complexity comparable to that required to solve a sparse system of linear equations. The developed acousto-elastic volumetric integral equation solver is capable of accurate large-scale numerical simulations involving anatomically realistic models of a human head, discretized with several million tetrahedral elements and characterized by complex geometrical details and large density contrasts. In order to gain confidence in the solver's adequacy to handle problems involving the highly intricate structures of the middle and inner ear (which are essential for reliable numerical simulations capable of discerning between different mechanisms of energy transfer to the cochlea), we carried out several solver self-consistency tests (involving two different forms of integral equations) and compared its predictions with those following from an analytical solution for the field distribution in an elasto-acoustic layered sphere.
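As a hedged, self-contained illustration of the general principle behind FFT-based matrix compression (not the authors' actual solver), a translation-invariant interaction kernel yields a Toeplitz matrix whose product with a vector can be evaluated in O(N log N) via circulant embedding:

```python
import numpy as np

def toeplitz_matvec_fft(first_col, first_row, x):
    """Multiply a Toeplitz matrix (given by its first column and first row)
    by a vector x in O(N log N) using circulant embedding and the FFT."""
    n = len(x)
    # Embed the Toeplitz matrix in a circulant matrix of size 2n.
    c = np.concatenate([first_col, [0.0], first_row[1:][::-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Example: 1/(1+|i-j|) kernel as a stand-in for a discretized Green's function.
n = 8
col = 1.0 / (1.0 + np.arange(n))
x = np.random.rand(n)
dense = np.array([[1.0 / (1.0 + abs(i - j)) for j in range(n)] for i in range(n)])
assert np.allclose(dense @ x, toeplitz_matvec_fft(col, col, x))
```

The same idea, applied blockwise on a regular volumetric grid, is what makes the dense integral-equation system tractable without lossy approximation.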
We present results of representative numerical simulations of acoustic energy transfer processes to the cochlea for a human head model containing a detailed geometry representation of the outer, middle, and inner ear. The geometry model used consists of: (1) the outer surface of the skin surrounding the skull and containing (2) the outer ear represented by its exterior surface, the surface of the auditory canal, and the tympanic membrane modeled as a finite-thickness surface; (3) the middle ear, consisting of the system of ossicles and supporting structures; (4) the skull, described by external surfaces of the bones constituting the skull and including (5) a set of surfaces representing the inner ear (boundaries of the cochlea, the vestibule, and the semi-circular canals). In addition, as an example of the code applicability to the verification of the effectiveness of noise-protection devices, we present results of numerical simulations for a model of a head protected by a helmet equipped with various material layers filling the space between the helmet and the surface of the head.
Graduate Program in Architectural Acoustics, School of Architecture, Rensselaer Polytechnic Institute, Troy, USA
ABSTRACT
Finite-difference methods are becoming increasingly popular in the acoustics community, and the importance of higher-order methods has been acknowledged. However, the importance of choosing an appropriate source for these methods has been largely overlooked in the acoustics literature. The way sources are defined influences the accuracy of the scheme and, more importantly for acoustic simulations, defines its usable frequency range. Many authors acknowledge the importance of selecting a continuous source function, but they do not consider whether the function's derivatives are also continuous. The error resulting from discontinuous higher-order derivatives can contaminate a finite difference simulation with unnecessary, low-order, dispersive error, diminishing the order of accuracy of the overall scheme. This problem is discussed in the context of simple wave-equation solvers of various orders with a variety of sources.
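A minimal sketch of the issue (not taken from the paper): a sine burst switched on abruptly has a discontinuous first derivative at t = 0, whereas multiplying it by a smooth ramp such as a Hann window makes the low-order derivatives continuous and avoids the associated broadband, dispersive excitation:

```python
import numpy as np

def abrupt_source(t, f0):
    """Sine burst switched on at t = 0: continuous, but its derivative jumps at t = 0."""
    return np.where(t >= 0.0, np.sin(2 * np.pi * f0 * t), 0.0)

def windowed_source(t, f0, n_cycles=4):
    """Same sine, ramped on over n_cycles periods with a Hann ramp so that the
    function and its first derivatives are continuous at t = 0."""
    ramp_len = n_cycles / f0
    w = np.where(t < ramp_len, 0.5 * (1.0 - np.cos(np.pi * t / ramp_len)), 1.0)
    return np.where(t >= 0.0, w * np.sin(2 * np.pi * f0 * t), 0.0)

t = np.linspace(-1e-3, 20e-3, 2000)
s_abrupt = abrupt_source(t, f0=500.0)
s_smooth = windowed_source(t, f0=500.0)
```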
(1) Beuth Hochschule fuer Technik Berlin, University of Applied Sciences, Berlin, Germany (2) Forschungsbereich fuer Wasserschall und Geophysik, Wehrtechnische Dienststelle fuer Schiffe und Marinewaffen, Kiel, Germany
ABSTRACT
The Multi-Level Fast Multipole Method (MLFMM) allows the calculation of the sound scattering from objects with a very high level of discretization. The required calculation time is much less when compared with conventional boundary element methods because the algorithm uses a level-based aggregation of the potentials from different point sources into acoustic multipoles, which greatly accelerates the computation of the required matrix-vector products. The basic theory and the functionality of the implementation of this algorithm in a parallelized version will be described. The results for direct and iterative BEM-based solution methods with and without use of the MLFMM algorithm will be shown using a rigid scattering body with different levels of discretization (consisting of more than one million elements). The CPU times of the different methods will be compared and the current limits of the algorithm will be discussed.
Department of Mechanical Engineering, University of Bristol, Bristol, UK
ABSTRACT
In this paper, a creep-damaged material is modelled as a two-phase composite material comprising a matrix and a distribution of clustered spherical voids. The voids are dispersed uniformly within oblate ellipsoidal regions that represent preferred regions of voiding close to grain boundaries. In turn, the ellipsoidal regions are distributed randomly in the matrix. A double composite model based on coherent elastic wave propagation is used to determine the effective stiffness and the overall density of the two-phase material. As the creep progresses, the ellipsoidal regions are sparsely scattered in the matrix, but they continue to grow in volume, containing more and more voids within them. This evolution results in an increase in anisotropy due to the preferential void formation within the ellipsoidal regions. Velocity estimates can be used to predict the elastic softening and the development of anisotropy, providing bulk-average information pertinent to the assessment of creep damage.
(1) Department of Mechanical Engineering, University of Bristol, Bristol, UK (2) Université de Bordeaux; CNRS; UMR 5469, Laboratoire de Mécanique Physique, Talence, France
ABSTRACT
Based on a dynamic-homogenization approach, the dispersion spectrum of coherent antiplane waves in an isotropic half-space containing a random distribution of strip-like cracks within a finite depth beneath the surface is calculated and analyzed. The disorder inside the damaged region is not uniform but depends on depth. The scattering-induced dispersion and attenuation cause the near-surface region to behave as a surface waveguide. As a result, the spectrum resembles that of Love waves.
College of Power and Energy Engineering, Harbin Engineering University, P.R.China.
ABSTRACT
Owing to its advantages of fast computation and drastic memory savings for large-scale problems, the FMBEM has developed rapidly in recent years. However, it is difficult to apply directly to the acoustic computation of mufflers with complex structures (such as mufflers with extended inlet/outlet tubes or perforated tubes). Two FMBEM approaches (the substructure FMBEM and the direct mixed-body FMBEM) are investigated and applied to predict the acoustic performance of mufflers in the present paper. For the substructure FMBEM, the interior acoustic domain is first divided into several subdomains, and then the FMBEM is applied to each domain. The direct mixed-body FMBEM can deal with all kinds of complex internal geometries without dividing subdomains, which is achieved by summing up all the integral equations in different zones and then adding the hypersingular integral equations at interfaces. The equations are discretized with constant elements, so the model can easily be created by assembling different surface components together. The transmission loss of expansion-chamber mufflers with extended tubes is predicted using the two approaches and verified against experimental data. The computational time is compared, and the computational accuracy and efficiency are discussed for the two FMBEM approaches.
George W. Woodruff School of Mechanical Engineering, UMI Georgia Tech, Metz-Technopole, France
ABSTRACT
The plane wave expansion, first developed by Lord Rayleigh and essentially based on a Fourier series, is described in detail. An overview is presented of the history of the (re)introduction of this technique in acoustics in the 1980s and its reappearance to solve current problems in nondestructive testing of materials and the interaction of sound with phononic crystals. For instance, one significant difference between periodic surfaces and smooth surfaces is that certain unique features occur in the reflection and transmission spectra obtained from periodic surfaces that do not appear in the spectra resulting from smooth surfaces. In particular, sharp discontinuities occur at certain frequencies in the spectra obtained from periodic surfaces. These discontinuities were first observed experimentally in ultrasonics by Jungman et al. in the early 1980s and were interpreted as being due to mode conversion between bulk and surface waves along the surface. They were named Wood anomalies in reference to the analogous optical phenomena introduced by Wood. Although the classical grating equation successfully described the relationship between surface periodicity, surface wave velocities, and the frequency positions of the anomalies, no other theoretical treatment was available at the time that could predict the occurrence of these anomalies in the spectra. It was soon discovered by Claeys and Leroy that the Plane Wave Expansion technique for modeling diffraction on periodic surfaces could accurately predict ultrasonic reflection and transmission spectra obtained from periodic liquid-solid interfaces. Anomalies in the spectra were attributed to the generation of Rayleigh or Scholte-Stoneley waves as a result of diffraction and mode conversion on the surface. More recently, a more exotic physical phenomenon, namely a backward beam displacement when sound interacts with a periodically corrugated surface, was explained by means of a combined theory of Plane Wave Expansion and inhomogeneous wave theory. Perhaps the greatest advantage of the Plane Wave Expansion technique is its straightforward applicability and the relative ease with which it produces amplitude and phase information for the diffraction orders.
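For reference, and as a standard result rather than one taken from the paper, the classical grating equation mentioned above relates the angles of the incident and m-th order diffracted waves to the corrugation period \(\Lambda\) and the wavelength \(\lambda\):

\[
\sin\theta_m \;=\; \sin\theta_i + \frac{m\lambda}{\Lambda}, \qquad m = 0, \pm 1, \pm 2, \ldots
\]

A Wood-type anomaly is expected near frequencies at which a diffraction order grazes the surface or phase-matches to a surface wave (e.g. a Rayleigh or Scholte-Stoneley wave).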
School of Earth and Ocean Sciences, University of Victoria, Victoria, BC, Canada
ABSTRACT
This paper considers matched-field tracking and track prediction for a moving ocean acoustic source when acoustical properties of the environment (water column and seabed) are not well known. The goal is not simply to estimate source locations but to determine track uncertainty distributions, thereby quantifying the information content of the tracking process. A Bayesian formulation is applied in which source and environmental parameters are considered unknown random variables constrained by noisy acoustic data and by prior information on parameter values (e.g., physical limits for environmental properties) and on inter-parameter relationships (limits on source velocity). Source information is extracted from the posterior probability density (PPD) by integrating over unknown environmental parameters to obtain a time-ordered series of joint marginal probability surfaces over source range and depth. Given the strong nonlinearity of the matched-field problem, marginal PPDs are computed numerically using efficient Markov-chain Monte Carlo (MCMC) methods, including Metropolis-Hastings sampling over environmental parameters (rotated into principal components and applying linearized proposal distributions) and two-dimensional Gibbs sampling over source range and depth. Non-unity sampling temperatures are employed to ensure complete sampling of the parameter space. Bayesian track prediction, in terms of source range-depth probability distributions for future times, is carried out by applying a probabilistic model for source motion to each track realization drawn from the PPD for past locations via MCMC sampling. These track predictions account for both the uncertainty of the source-motion model and the uncertainty in the state of knowledge of past source locations, which is itself dependent on environmental uncertainty. The approach is illustrated using both simulations and acoustic data collected in the Mediterranean Sea, and tracking information content is considered as a function of data quantity (number of time samples and frequencies processed), data quality (signal-to-noise ratio), and the level of prior information on environmental parameters.
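As a minimal, hypothetical sketch of the Metropolis-Hastings step used in such sampling (the actual matched-field likelihood, parameter rotation and Gibbs steps are of course far more involved), assuming a user-supplied log_posterior function over the unknown parameters:

```python
import numpy as np

def metropolis_hastings(log_posterior, x0, proposal_std, n_samples, rng=None):
    """Random-walk Metropolis-Hastings sampler.
    log_posterior : callable returning the log of the (unnormalized) PPD
    x0            : starting parameter vector
    proposal_std  : per-parameter standard deviation of the Gaussian proposal
    """
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    logp = log_posterior(x)
    samples = []
    for _ in range(n_samples):
        x_new = x + proposal_std * rng.standard_normal(x.shape)
        logp_new = log_posterior(x_new)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.random()) < logp_new - logp:
            x, logp = x_new, logp_new
        samples.append(x.copy())
    return np.array(samples)

# Toy usage: a 2-D Gaussian posterior as a stand-in for the environmental PPD.
log_post = lambda x: -0.5 * np.sum(x**2)
chain = metropolis_hastings(log_post, x0=[0.0, 0.0], proposal_std=0.5, n_samples=5000)
```

Marginal distributions (e.g. over source range and depth) would then be estimated by histogramming the retained chain samples.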
The University of Salford, Salford, UK
ABSTRACT
This paper presents a technique to predict the output from an in-house developed 128-channel wavefield synthesis system deployed within realistic room acoustic scenarios. Based on the Finite Difference Time Domain method, the acoustic prediction detailed here utilises ray-based voxelisation and a least-pth-norm filter design approach to import hall data in popular formats that specify complex room geometries and the frequency-dependent absorption profiles of surface materials. Using this approach, the objective and subjective fidelity of wavefield synthesis within realistic deployment scenarios can be better understood.
(1) Zel Technologies LLC, and NOAA/Earth System Research Laboratory, Boulder, CO, USA (2) NOAA/Earth System Research Laboratory and CIRES, University of Colorado at Boulder, Boulder, CO, USA
ABSTRACT
Interconnection of wave fields in the ocean and atmosphere is controlled, to a large degree, by the transparency of the water-air interface. This paper investigates the reflection and transmission of acoustic-gravity waves (AGWs) at the water-air interface. Assuming constant sound speeds and exponentially stratified mass densities in the two media, general equations for the reflection and transmission coefficients of quasi-plane AGWs are obtained and compared to the Rayleigh transmission and reflection coefficients for acoustic plane waves at a fluid-fluid interface. For quasi-plane waves transporting energy in the vertical direction, it is found that gravity plays an important role in AGW refraction at the interface only at extremely low frequencies of the order of the acoustic branch cut-off frequency in the atmosphere (~3.3 mHz). Gravity has a more pronounced effect on the wave field due to a point source. In addition to incident and reflected body waves in water and a transmitted body wave in air, an underwater point source can generate two kinds of surface waves, which propagate along the water-air interface. One of the surface waves resembles the well-known surface gravity wave supported by an incompressible fluid and usually carries most of its energy within the water. The other surface wave is similar to the Lamb wave in an isothermal atmosphere over rigid ground, carries most of its energy in air, and propagates with a phase speed close to the sound speed in air. Unlike the Lamb wave, the quasi-Lamb wave supported by the air-water interface is cut off at frequencies above about 1 Hz. As an integral measure of AGW transmission through the water-air interface, the power flux through the interface has been calculated for waves generated by a point, monopole, underwater source. It has been found that the interface becomes anomalously transparent when the point source approaches the interface. A dramatic, O(10^3) increase in the power transmitted into the atmosphere occurs around a certain frequency determined by the acoustic branch cut-off frequency in air. Physical mechanisms responsible for the anomalous transparency are discussed.
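For comparison, and as a textbook result rather than one derived in the paper, the Rayleigh reflection and pressure transmission coefficients for a plane acoustic wave at a fluid-fluid interface, against which the AGW coefficients are compared, are

\[
R = \frac{Z_2 - Z_1}{Z_2 + Z_1}, \qquad
T = \frac{2 Z_2}{Z_2 + Z_1}, \qquad
Z_i = \frac{\rho_i c_i}{\cos\theta_i},
\]

with the propagation angles \(\theta_i\) related by Snell's law; the gravity corrections discussed in the abstract become significant only near the acoustic branch cut-off frequency.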
Universitaet der Bundeswehr Muenchen Fakultaet fuer Luft- und Raumfahrttechnik, Neubiberg, Bavaria 85577, Germany
ABSTRACT
In this talk the numerical simulation of the sound spectrum and the propagation of the acoustic noise inside and around a three-dimensional recorder model is presented. The fluid inside and close to the recorder is meshed by Lagrangian tetrahedral finite elements. Complex conjugated Astley-Leis infinite elements are used to obtain results in the far field of the recorder. When playing a recorder, the air column inside the instrument starts to oscillate due to the inserted air flow. The musician is able to influence the frequency of a note by varying the blowing pressure, and thereby a fine-tuning of the sound is possible. The sound propagation in fluids with a non-uniform flow can be described by the Galbrun equation. We present the influence of the flow on the eigenfrequencies. Furthermore, it is possible to represent the excitation mechanism for sound propagation inside and around the recorder with quadrupole sources, which occur in the surroundings of the labium. The numerical results are compared to measurements on the recorder.
G.W.Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
ABSTRACT
This paper describes the derivation of a set of time domain equations describing the interaction of an arbitrary acoustic domain and a structure that forms all or a portion of the boundary, with the remainder of the bounding surface allowed to be subregions on which the pressure or normal velocity are specified temporal inputs. The primary objective is the development of a general semi-analytical method, but it is equally suitable as the foundation for a unified finite element formulation. Prior efforts to use Hamilton's principle as the foundation for such a derivation were hampered by the need to satisfy velocity continuity at the boundary and to assure that the flow is irrotational. The present formulation considers the pressure at that interface to be a constraining surface traction that imposes kinematical constraint conditions. These conditions are enforced explicitly as auxiliary equations. An extended version of Hamilton's principle governing the particle displacements of the fluid and solid media is derived, in which the surface traction appears as a Lagrange multiplier function. The requirement that the acoustical response be irrotational is addressed by using a Ritz series to describe the velocity potential. The individual terms in this series are products of scalar basis functions and generalized velocities, from which expressions for the particle velocity and displacement, and the acoustic pressure, are readily extracted. The structural displacement is described by a conventional Ritz series. The surface traction also is represented by a Ritz-like series with a set of basis functions that span the regions of the surface where velocity continuity must be enforced explicitly. The various series are used to describe the mechanical energies and virtual work in Hamilton's principle, from which equations governing the generalized coordinates of both media are derived by application of the calculus of variations. The motion equations are augmented by a set of algebraic kinematical constraints, which are obtained by requiring that the error in satisfying interface velocity continuity be orthogonal to the surface basis functions. Because the assembled set of equations contains derivatives of the generalized coordinates but not of the traction coefficients, it is said to be of differential-algebraic type. Its form matches that obtained when the equations of motion for nonholonomic mechanical systems are linearized. The assembled set of equations is shown to be symmetric, and therefore consistent with the fundamental principle of reciprocity.
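Schematically (a generic statement, not the authors' exact functional), an extended Hamilton's principle with an interface constraint g = 0 enforced by a Lagrange multiplier field \(\lambda\) has the form

\[
\int_{t_1}^{t_2}\Bigl[\delta(T - V) + \delta W_{\mathrm{nc}} + \int_{S_c} \lambda\,\delta g \,\mathrm{d}S\Bigr]\mathrm{d}t \;=\; 0 ,
\]

where T and V are the kinetic and potential energies of fluid and structure, \(\delta W_{\mathrm{nc}}\) is the virtual work of non-conservative loads, and \(S_c\) is the interface on which velocity continuity is imposed; in the formulation described above, the multiplier is identified with the interface surface traction.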
CIRES, University of Colorado and NOAA/Earth System Research Laboratory, Boulder, CO, USA
ABSTRACT
Wave fields possess significant correlations at ranges that are large compared to the wavelength, even when the waves are generated by random delta-correlated sources. For perfectly diffuse noise, it has been demonstrated theoretically by a number of authors that one can retrieve the exact acoustic Green's function (GF) of an inhomogeneous medium from the noise cross-correlation function (CCF). The connection between the GF and the CCF suggests using ambient noise as a probing signal to characterize the propagation medium. In acoustic oceanography, passive techniques offer important advantages over remote sensing of the ocean with active techniques: low cost (a receiver substitutes for a technologically much more complicated transceiver), the possibility of noninvasive measurements (in particular, avoiding any harm to marine life), extremely broad bandwidth, and much longer periods of autonomous operation due to a drastic reduction in power consumption.
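In its idealized form (for perfectly diffuse, delta-correlated noise; quoted here as the standard result the abstract refers to, not as the authors' generalization), the connection between the noise cross-correlation function C and the Green's function G between receivers at \(\mathbf{r}_1\) and \(\mathbf{r}_2\) reads, up to a frequency-independent factor,

\[
\frac{\mathrm{d}}{\mathrm{d}\tau}\, C(\mathbf{r}_1,\mathbf{r}_2,\tau) \;\propto\; G(\mathbf{r}_1,\mathbf{r}_2,-\tau) \;-\; G(\mathbf{r}_1,\mathbf{r}_2,\tau),
\]

so that the causal and anticausal parts of the correlation carry the two one-way Green's functions; the paper addresses what survives of this relation when the noise field is neither isotropic nor perfectly diffuse.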
However, the assumptions necessary to establish an exact algebraic relation between the GF and the CCF are rarely valid for ambient noise in the ocean or atmosphere. In this paper, we report results of an asymptotic analysis of the information content of two-point correlation functions of non-isotropic, diffuse ambient noise under realistic assumptions, when there exists no simple, exact, local relation between the CCF and deterministic GFs. For noise sources distributed on a surface or in a volume of a moving or motionless, inhomogeneous, lossy fluid, a relation is established between contributions of ray arrivals to the CCF and the GFs. A similar relation is derived for contributions to the CCF and GFs due to adiabatic normal modes in a horizontally inhomogeneous waveguide. The impact of non-uniformity of the spatial distribution of noise sources on the accuracy of passive measurements of acoustic travel times is quantified. Effects of the finite correlation length of noise sources on GF retrieval from ambient noise are discussed. Our theoretical results demonstrate the potential and indicate limitations of an extension of the highly successful seismic noise interferometry to passive remote sensing of temperature and flow velocity fields in the ocean and atmosphere.
GTM Grup de recerca en Tecnologies Mèdia, La Salle, Universitat Ramon Llull, Barcelona, Spain
ABSTRACT
A link between graph theory and statistical energy analysis (SEA) has recently been established. This allows resorting to the former to solve many issues related to energy transmission paths in SEA models. In this work, we benefit from this connection to implement an algorithm for ranking the set of K maximum energy transmission paths from a source subsystem to a target subsystem in an SEA model. Problems arising if the stochastic nature of loss factors were to be incorporated in the computation of paths are also outlined. The algorithm can prove very useful for the noise control engineer. For instance, knowing whether energy transmission between sources and targets in an SEA system is carried by a limited set of paths or not can be helpful in determining noise control treatments. Moreover, it is also relevant in the context of transmission loss regulations between dwellings.
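A hedged sketch of one way such a ranking can be implemented (the paper's own algorithm and SEA-specific weighting are not reproduced here): if each coupling is assigned a path "cost" equal to the negative log of its energy transfer ratio, the K strongest transmission paths correspond to the K shortest simple paths in the weighted graph, e.g. using networkx:

```python
import math
from itertools import islice
import networkx as nx

# Hypothetical SEA model: nodes are subsystems; 'ratio' is the fraction of
# energy transmitted across each junction (illustrative numbers only).
couplings = [("source", "A", 0.30), ("source", "B", 0.10),
             ("A", "B", 0.20), ("A", "target", 0.05),
             ("B", "target", 0.25)]

G = nx.DiGraph()
for u, v, ratio in couplings:
    # Maximising the product of ratios == minimising the sum of -log(ratio).
    G.add_edge(u, v, weight=-math.log(ratio))

K = 3
for p in islice(nx.shortest_simple_paths(G, "source", "target", weight="weight"), K):
    cost = sum(G[a][b]["weight"] for a, b in zip(p, p[1:]))
    print(p, f"energy transfer ratio ~ {math.exp(-cost):.4f}")
```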
ESI R&D, San Diego, USA
ABSTRACT
Holes and passthroughs can often have a significant influence on the overall transmission loss (TL) of a trimmed panel, particularly at mid and high frequencies. In order to optimize a given sound package it is therefore necessary to account for holes and passthroughs in a model. In an SEA model the passthroughs can be described using "leaks" applied to various area junctions. The TL of each leak is then calculated using analytical formulae (based on circular or rectangular holes). In some instances it is useful to obtain a more detailed model of the TL of the passthrough. This includes, for example, situations in which the passthrough only penetrates certain layers of a multi-layer noise control treatment. In this paper, local Hybrid FE-SEA models with Foam Finite Elements (PEM subsystems) are used to model the TL of partially trimmed passthroughs. The predicted TL can then be used to update a system-level SEA model. A number of numerical examples are presented and the results are discussed.
College of Power and Energy Engineering, Harbin Engineering University, Harbin, P.R.China
ABSTRACT
This paper addresses some important issues in structural-acoustic interactions. The structural-acoustic coupling characteristics and mechanisms, and the effect of structural-acoustic coupling on the natural modes and natural frequencies of the coupled system, are analyzed theoretically and numerically from a new point of view. Certain interesting results are presented, especially regarding the effect of coupling on the modal behaviour of the coupled system. The results show that the strongly coupled system exhibits obvious closed-loop feedback characteristics, whereas the weakly coupled system exhibits obvious feedforward characteristics, and it is due to the presence of the feedback loop that the natural modes and natural frequencies are changed. Cluster coupling characteristics between the structural and acoustic modes for the regular cavity and panel system are revealed, which determine the structural-acoustic interactions between the flexible panel and the cavity.
NOVIC, KAIST, Korea
ABSTRACT
The time-domain acoustic boundary element method (TBEM), which is based on the Kirchhoff integral equation, employs time-marching field integral algorithms. This method would be useful in solving various transient acoustic problems, but it usually suffers from instability: errors increase exponentially at each time step and the calculation finally diverges. Various attempts have been made to treat this instability problem, but the onset of instability is only delayed and the instability still occurs under some conditions. This paper describes our recent effort to stabilize the TBEM calculation by wave vector filtering. Using the implicit formulation and state-space modeling, the TBEM formulation is expressed as a single geometric progression. This approach is applicable to problems with all types of boundary conditions except the impedance condition, which is modeled by an IIR filter. The response wave vector is the time-varying distribution of surface variables within the maximum retarded time. Time-domain wave vectors are obtained from the real and imaginary parts of the eigenvectors of the transfer matrix of the geometric progression. Time responses are composed of the sum of wave vector components, which are amplified by the magnitudes of their eigenvalues at each time step. As a simulation example, sound propagation from a point source in a rigid box was considered. Instability occurred due to numerical error in the resonant modes. Stabilization of the TBEM calculation could be achieved by excluding wave vectors whose eigenvalues were larger than one.
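A minimal sketch of the filtering idea (assuming a generic linear marching recursion, not the paper's actual TBEM matrices): decompose the state-transition matrix into eigencomponents and project out those whose eigenvalue magnitudes exceed one before marching:

```python
import numpy as np

def stabilize_marching(A, x0, n_steps, tol=1.0):
    """March x_{k+1} = A x_k after removing eigencomponents of A with |lambda| > tol,
    mimicking the wave-vector filtering idea for unstable time-marching schemes."""
    lam, V = np.linalg.eig(A)
    keep = np.abs(lam) <= tol                             # discard amplifying components
    Vinv = np.linalg.inv(V)
    A_stable = (V[:, keep] * lam[keep]) @ Vinv[keep, :]   # filtered transition matrix
    x = np.asarray(x0, dtype=complex)
    history = [x]
    for _ in range(n_steps):
        x = A_stable @ x
        history.append(x)
    return np.real(np.array(history))

# Toy example: one slightly amplifying (unstable) mode and one decaying mode.
A = np.array([[1.02, 0.0], [0.0, 0.8]])
states = stabilize_marching(A, x0=[1.0, 1.0], n_steps=50)
```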
National Institute for Mathematical Sciences, Daejeon, Korea
ABSTRACT
A wide range of problems concerning important areas of analytical acoustics are associated with applications of special functions. For instance, the Bessel function plays a key role in acoustic problems defined in cylindrical coordinates, and has therefore been studied and used extensively by physicists and engineers as well as mathematicians. In this paper, the author presents a special function called the generalized gamma function, which occurs in the mathematical theory of diffraction.
The paper consists of two parts. In Part One, the generalized gamma function is defined in its original form, first introduced by Kobayashi in 1991. Then, the appearance of this special function in analytical acoustics is briefly explained by formulating the Wiener-Hopf integral equation for the classical diffraction problem of a finite strip or a single slit. Part Two starts with the derivation of a new and exact formula for the generalized gamma function in the specific form occurring in finite diffraction theory. The characteristics of the present formula are graphically illustrated, and it is numerically compared with two existing formulas. First, the present formula is compared with Kobayashi's asymptotic formula, with a discussion of the lower bound on the argument for which the relative error is less than 0.0001. Second, the present formula is compared with Srivastava's exact formula (2005) from the viewpoint of computational accuracy and efficiency for large arguments. Finally, the author discusses the limitations of the present formula and future work concerning further mathematical improvements as well as practical applications of the generalized gamma function.
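For orientation (quoted from the diffraction literature rather than from this paper, so the exact normalization may differ), the generalized gamma function introduced by Kobayashi is usually written as an integral of the form

\[
\Gamma_m(u, v) \;=\; \int_0^{\infty} \frac{t^{\,u-1}\, e^{-t}}{(t+v)^m}\,\mathrm{d}t , \qquad \operatorname{Re} u > 0 ,
\]

which reduces to the ordinary gamma function \(\Gamma(u)\) when m = 0.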
Instituto para la Gestión Integrada de las Zonas Costeras, Universidad Politécnica de Valencia, Spain
ABSTRACT
Plant tissues, in particular orange skin, are essentially aqueous-filled structures of varying density and elasticity composed of fatty and acidic substances, as well as areas of intercellular air. In recent years many techniques have been developed for the ultrasonic characterization of fruit and vegetables in postharvest processes. All have found common macroscopic acoustic parameters: a slow propagation speed and a large absorption. The aim is therefore to obtain a prediction model of ultrasound propagation in this viscoelastic heterogeneous medium by FDTD (Finite Difference Time Domain) techniques. Thus, this paper presents a time-domain numerical model for simulating acoustic propagation in plant tissue. This model correctly describes the special characteristics of wave propagation experimentally detected in these biological tissues, emphasizing the need for frequency-dependent elastic characteristics: a frequency-dependent elasticity modulus. These simulation results have been compared with measurements obtained from an experimental device to validate the numerical model. The validation of a model of mechanical wave propagation in heterogeneous media is of great interest because it allows the understanding and development of new techniques for the characterization of complex materials, providing tools for predicting propagation processes in this kind of media. In addition, this work is a compilation of different numerical methods for acoustic simulations in heterogeneous viscoelastic tissues, describing their implementation in Cartesian, polar, cylindrical and spherical coordinate systems.
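As a minimal, hypothetical illustration of the kind of scheme involved (a simple 1-D lossless staggered-grid FDTD update, without the frequency-dependent elasticity and absorption the paper adds), the pressure-velocity leapfrog update reads:

```python
import numpy as np

# Simple 1-D acoustic FDTD on a staggered grid (illustrative parameters only).
nx, nt = 400, 1000
c, rho = 1500.0, 1000.0          # sound speed [m/s] and density [kg/m^3] (water-like)
dx = 1e-3
dt = 0.5 * dx / c                # CFL-stable time step
kappa = rho * c**2               # bulk modulus

p = np.zeros(nx)                 # pressure at integer grid points
u = np.zeros(nx + 1)             # particle velocity at half grid points

for n in range(nt):
    # Update particle velocity from the pressure gradient.
    u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
    # Update pressure from the velocity divergence.
    p -= dt * kappa / dx * (u[1:] - u[:-1])
    # Soft source: a short Gaussian pulse injected at the centre of the grid.
    p[nx // 2] += np.exp(-((n - 100) / 25.0) ** 2)
```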
The University of New South Wales, NSW, Australia
ABSTRACT
Mufflers are incorporated into continuous positive airway pressure (CPAP) devices to reduce noise in the air paths to and from the flow generating fan. The mufflers are very small, irregularly shaped and are required to attenuate noise up to 10 kHz. The acoustic performance of these predominantly reactive mufflers can be enhanced with the inclusion of dissipative materials. It is important that the acoustic performance of these mufflers is reliably predicted and optimised, in order to improve the user experience and maximise compliance with the CPAP therapy. In this study, the acoustic properties of two polyurethane foams were determined using a two-cavity method. Acoustic models of two muffler designs, having dimensions similar to those used in CPAP devices and incorporating foam-filled regions, have been developed using a commercial finite element analysis software package. Experimental results for the mufflers have been obtained using the two-microphone acoustic pulse method. Results of the transmission loss of the muffler designs obtained from the finite element models are presented and validation of the computational results is discussed.
School of Mechanical and Manufacturing Engineering The University of New South Wales, Sydney 2052, Australia
ABSTRACT
Pipe laggings are used as a means of inhibiting the transmission of sound radiated from pipes. They usually consist of porous jackets of high flow resistivity and impervious sheets made from metals or plastics. The acoustic performance of a lagging system is usually quantified in terms of its frequency-dependent insertion loss. Papers in the readily available literature relating to the acoustic performance of pipe lagging are generally concerned with presenting experimental results together with some prediction models. This paper looks at the merits of the available prediction models of insertion loss associated with the lagging of cylindrical pipes.
(1) Tokyo Metropolitan University, Japan (2) Doshisha University, Japan
ABSTRACT
To date, numerical analysis of sound wave propagation in the time domain has been investigated widely as a result of advances in computing. Acoustic simulation in the time domain is an effective technique for the estimation of time-series sound pressure data (e.g., nonlinear acoustic propagation phenomena). The development of accurate numerical schemes in the time domain is therefore an important technical issue. The finite difference time domain (FDTD) method is the most popular scheme used in acoustics. However, with Yee's leapfrog algorithm, the finite difference approximation inevitably causes error owing to numerical dispersion. In a past study, the authors proposed an acoustic simulation technique using the generalized constrained interpolation profile (GCIP) method, which is an extended CIP method. It is a method of characteristics with very high accuracy; i.e., it enables calculation with less numerical dispersion. However, this method requires more calculation time than the conventional dispersive schemes. When analyzing large-scale sound wave propagation, reduction of the calculation time is a necessary requirement. Generally, calculation time and computational cost are proportional to the number of grid points. Additionally, more accurate schemes also require more computational cost. Recently, the GPU (Graphics Processing Unit) has been used as an acceleration tool for calculations in various fields of study. This movement is called GPGPU (General Purpose computing on GPUs). In the last few years the performance of GPUs has kept improving rapidly, so that a PC (personal computer) with GPUs can act as a personal supercomputer.
This study examines how to decrease the calculation time of acoustic field numerical analysis using GPUs. We implement time-domain acoustic simulations (the FDTD method, the CIP method, and the GCIP method) on GPUs using CUDA (Compute Unified Device Architecture). We examine suitable algorithms and efficient CUDA thread models for single- and multi-GPU computing. As a result, it is clarified that the multi-GPU FDTD calculation is 32 times faster than the multi-CPU FDTD calculation, while the multi-GPU GCIP calculation requires about 1/100 of the calculation time of the multi-CPU calculation. That is, we obtain 526 GFLOPS performance using 8-GPU parallel computation in a 2-D GCIP simulation. Therefore, for large-scale acoustic simulation, these results show the feasibility of high-speed and high-precision simulation analysis by hardware acceleration using multi-GPU calculation.
School of Mechanical Engineering, Pusan National University, Pusan, Korea
ABSTRACT
In this paper, time-domain computational aeroacoustic (CAA) methods are developed to predict broadband noise generation and propagation by a rectilinear cascade of flat plates interacting with an ingested turbulent gust. Utilizing a three-dimensional time-domain CAA model, the three-dimensional characteristics of inflow turbulence noise due to the turbulence-cascade interaction are investigated, with an emphasis placed on the effects of the wavenumber components of the ingested turbulence in the spanwise direction. Through the comparison of three-dimensional results with two-dimensional ones, the characteristics of the sound pressure spectra obtained from the three-dimensional results are revealed, which is mainly due to the different dispersion relations of acoustic waves in two and three dimensions. These differences between two- and three-dimensional results become more significant in the lower-frequency components. In addition, some preliminary numerical results on the effects on the broadband noise of a rectilinear cascade of leaned or swept flat plates are illustrated, which can provide basic principles for the low-noise design of stators in turbofan engines.
Ray W. Herrick Laboratories, Purdue University, Indiana, USA
ABSTRACT
The classical Maa theory for microperforated materials was initially formulated for constant-diameter cylindrical holes. Since then, a number of ad hoc corrections have been suggested to account for different hole shapes, in particular rounding of the aperture. Here it is shown that the resistance and reactance of small apertures may be calculated using relatively simple CFD models in which a single hole is modeled. The fluid is assumed to be viscous but incompressible, and the holes are assumed to be axisymmetric. It will be shown that this approach reproduces the classical theory of Maa for circular, sharp-edged apertures. However, it will also be shown that classical theories lack a static end correction, which limits their accuracy at low frequencies.
(1) Chemnitz University of Technology, Institute of Lightweight Structures and Sports-Engineering, 09107 Chemnitz, Germany (2) Universitaet der Bundeswehr Muenchen, Faculty of Aerospace Engineering, Institute of Mechanics, 85577 Neubiberg, Germany
ABSTRACT
For improving the efficiency of products and processes, continuous fibre-reinforced composites offer high potential for tailoring material parameters while at the same time being suited to mass production. Due to high material costs and complex manufacturing techniques, widespread application has not yet begun. Dynamic properties, however, are essential in requirement specifications for newly developed devices, owing to the increasing dynamic and acoustic sensitivity of lightweight structures. The shortage of natural resources demands efficient and low-cost materials with excellent mechanical properties for both dynamic and static load cases. In this paper, static and dynamic material properties of thermoplastic composites with glass- and carbon-fibre reinforcements are determined. In detail, the investigations are based on the experimental modal analysis of plate specimens to examine the elastic engineering constants and characterise the damping behaviour of the composites. Furthermore, acoustic analysis, free-oscillation tests of thin beams and finite element models of the experimental setups are used for the validation of the material parameters. The results of all investigations are compared to state-of-the-art metal-based composites with plastic midsurfaces, which are typically used for the reduction of structure-borne noise and vibration.
University of Salford, UK
ABSTRACT
FDTD has become a popular tool in acoustic modelling in recent years. A main attraction of FDTD is that it can be implemented in computer code easily using simple, straightforward marching algorithms and finite difference equations. However, this simplicity comes at a price. Dispersion errors, source scattering, and frequency-dependent boundary reflections are just a few of the problems that FDTD has to deal with. Generally the former two are artefacts of the numerical scheme. The last one, however, is a key component in any room acoustics application. In theory, the boundary condition can be represented as an impulse response to be convolved with the FDTD update equations. Unfortunately, this is a rather time-consuming process. There are various approximations that can be used to represent a frequency-dependent boundary condition in the time domain to speed up the calculation, but their suitability for room acoustics applications has rarely been properly validated. In particular, a practical problem faced in real room acoustics applications is that full-bandwidth data on a boundary's impedance are rarely available. In fact, in most cases one may only have information on the absorption coefficient of the boundary in octave frequency bands. Hence it will be of interest to see if an approximation based on the absorption coefficient alone can be used in an FDTD scheme to produce acceptable results. The purpose of this paper is to compare different ways of modelling frequency-dependent boundary conditions in an FDTD scheme to predict the sound field in a room. In this study, the accuracy of these methods will be validated against calculations by the more accurate boundary element method to assess their applicability in terms of room acoustics criteria.
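One simple possibility of the kind discussed above (a hedged sketch, not the paper's method): convert each octave-band absorption coefficient to a reflection magnitude |R| = sqrt(1 - alpha), assuming normal incidence and no phase information, and fit a short FIR reflection filter to those samples for use at the FDTD boundary:

```python
import numpy as np
from scipy.signal import firwin2

fs = 44100.0                                             # FDTD sampling rate [Hz]
bands = np.array([125, 250, 500, 1000, 2000, 4000])      # octave-band centres [Hz]
alpha = np.array([0.10, 0.15, 0.25, 0.40, 0.55, 0.60])   # absorption coefficients (example values)

# Reflection magnitude per band, assuming normal incidence and ignoring phase.
refl = np.sqrt(1.0 - alpha)

# Frequency grid from 0 to Nyquist (normalized to [0, 1]) as required by firwin2.
freqs = np.concatenate(([0.0], bands / (fs / 2.0), [1.0]))
gains = np.concatenate(([refl[0]], refl, [refl[-1]]))

# Short FIR filter approximating the frequency-dependent reflection coefficient.
h_refl = firwin2(numtaps=33, freq=freqs, gain=gains)
```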
(1) School of Mechanical Engineering, Pusan National University, Korea (2) Fluid Flow/Acoustics & Vibration Group, Division of Physical Metrology, Korea Research Institute of Standards and Science, Daejeon, Korea
ABSTRACT
In this paper, the aerodynamic noise sources of upwind horizontal-axis wind turbines are experimentally and theoretically investigated. First, the dominant noise sources on the rotor plane of the wind turbines are localized by using beamforming techniques. These visualized acoustic fields reveal the dominant source locations on the wind turbine. Then, theoretical predictions for identifying the dominant source locations are made by using the empirical noise prediction model of Brooks et al. (1989) for airfoil self-noise. Through the comparison of the predicted results with the experimental data, it is shown that predictions using the formula for laminar boundary layer vortex shedding (LBLVS) noise do not match the measurements, which points to the need to improve the present empirical prediction formula.
Institute of Modelling and Computation, Hamburg University of Technology, Hamburg, Germany
ABSTRACT
The vibro-acoustic optimisation of technical components gains more and more in importance, as customers are increasingly sensitive to the impact of noise and vibration on subjective well-being. Especially within vehicle engineering, considerable efforts are made to design and optimise the vibro-acoustic performance, and the significance of numerical investigations and predictions in this area is continuously increasing. Unfortunately, the vibro-acoustic characteristics of complex technical systems are observed to vary over a wide range of several dB for supposedly identical products. Among other factors, production tolerances and varying environmental and operating conditions have a considerable impact on the vibro-acoustic behaviour, which is essentially based on resonant effects. Additionally, numerical models often suffer from a lack of knowledge regarding single model parameters, which may be caused, among other things, by generally limited information or by specifications that are not yet finalized in early design stages. A reliable numerical model should be able to reproduce these influences and allow for an estimation and evaluation of the vibro-acoustic variations. Various methods based on probability theory, fuzzy sets, and other approaches exist to take such uncertain parameters into consideration. However, a major drawback of the methods that are most universal and relevant to real-life applications, like Monte Carlo simulation or the fuzzy transformation method, is their need to repeatedly evaluate a basic numerical model, which is continuously manipulated with respect to its uncertain parameters. Keeping in mind the traditionally high computational cost of vibro-acoustic finite element models, the calculation of hundreds or even thousands of model evaluations sets a practical limit to the applicability of these methods. As a way out, a novel approach called the iterative method for multiple evaluations (IMME) has been formulated, which is a structured methodology to efficiently perform an uncertain finite element analysis based on multiple evaluation runs. Its general philosophy is the fragmentation of the vibro-acoustic system into different structural and fluid substructures with few or no uncertainties. To evaluate the overall system, the different substructures are reconnected by component mode synthesis and an iterative coupling of fluid and structural parts.
The large computation-time advantage of the IMME for uncertain vibro-acoustic analysis is exemplified for representative investigations of different components of an aircraft cabin. Depending especially on the system size, the number of evaluation runs, and the number of examined load cases, a decrease of the overall computation time down to about 1% of that of the direct-coupled solution is possible.
(1) Department of Physics and Mathematics, University of Eastern Finland, Kuopio Campus, Kuopio, Finland (2) Department of Mathematical Sciences, University of Delaware, Newark, USA
ABSTRACT
Wave modeling at high frequencies is usually time consuming and computationally demanding, especially for elastic wave problems. For example, in the standard polynomial finite element method (FEM), dense meshes are needed to approximate elastic wave problems accurately. Therefore, more efficient and accurate methods, such as non-polynomial finite element methods, need to be developed for solving these problems. We will focus on the ultra weak variational formulation (UWVF), which uses physical basis functions. However, methods based on non-polynomial basis functions may have challenges with ill-conditioning at low frequencies.
In this talk we consider the UWVF for 3D elastic wave problems. The UWVF was first developed for the Helmholtz equation and Maxwell's equations by Cessenat and Després. The UWVF is a volume-based method that uses non-polynomial basis functions such as plane waves or Bessel basis functions. In the case of a plane wave basis, the integrals in the variational formulation can be computed efficiently in closed form. In addition, the UWVF has been shown to be a special case of the discontinuous Galerkin method (DGM) and therefore shares the same properties; for example, discontinuities on the element edges (in 2D) or faces (in 3D) are allowed. This study is a continuation of the 2D elastic UWVF. We show that the UWVF is a potentially attractive method for solving 3D elastic wave problems. We shall show results using the UWVF with different wave numbers in a cubic domain. In addition, we shall discuss accuracy and ill-conditioning issues in 3D. In the future we shall apply the UWVF to more complex problems and to fluid-solid problems in 3D; one of the main practical applications will be the simulation of focused ultrasound surgery.
Department of Architecture, Tokyo University of Science, Japan
ABSTRACT
The cocktail party effect is the auditory ability to distinguish a particular sound signal from other sounds such as noise or reflected signals. The cause of the cocktail party effect has been investigated in various fields and from various points of view, and blind source separation is one of the methods used to address it. The authors have previously proposed methods for specifying the locations of source signals and separating the source signals using time-frequency information and the time lags of the source signals, as a blind source separation technique. However, that method cannot take reflected signals into account, so for practical problems it is of limited use. In this paper, we propose a method that achieves complete separation of the source signal and complete specification of its location from the observation signals for the problem of a single source signal with one reflection.
First, the mathematical formulation is presented. The observation signals are transformed to the Fourier domain, and a relationship between the observation signals is obtained by eliminating the source signal. The relationship is represented by attenuation coefficients and time lag coefficients. We make use of the characteristics of the time lag coefficients, which are represented by exponential terms in the discrete Fourier domain. For instance, the exponential term takes the values 1 or -1 when the Nyquist frequency is substituted into it, and 1, -1, i or -i (i denoting the imaginary unit) when half the Nyquist frequency is substituted. These characteristics are exploited repeatedly. Then, by using the relationship between the real-valued attenuation coefficients and the complex values in the Fourier domain, the attenuation coefficients and time lag coefficients are completely specified, from which the location of the source signal and the source signal itself are determined.
Moreover, a numerical test is conducted to confirm the method; the location of the source signal and the source signal itself are recovered to within numerical error.
Acoustics and Vibration Group, School of Engineering and Information Technology, UNSW@ADFA, Canberra, Australia
ABSTRACT
Owing to significantly reduced interior noise, resulting from the reduction of internal combustion engine noise and tyre-road contact noise and from the use of lightweight composite materials for the car body, disc brake squeal has become an increasing concern to the automotive industry because of the high costs of warranty-related claims. While it is now almost standard practice to use the complex eigenvalue method in commercial finite element codes to predict unstable vibration modes, not all predicted unstable vibration modes will squeal and vice versa. There have been very few attempts to calculate the acoustic radiation from predicted unstable vibration modes. Guidelines on how to predict brake squeal propensity with confidence are yet to be established. In this study, three numerical aspects important for the prediction of brake squeal propensity are examined: how to select an appropriate mesh; comparisons of methods available in ABAQUS 6.8-4 for harmonic forced response analysis; and comparisons of boundary element methods (BEM) for acoustic radiation calculations in LMS VL Acoustics and ESI VA. In the mesh study, results indicate that the mesh has to be sufficiently fine to predict unstable modes that are mesh independent. While linear and quadratic tetrahedral elements offer the best options in meshing more realistic structures, only quadratic elements should be used for solutions to be mesh independent. Otherwise, linear hexahedral elements represent an alternative but are not as easy to apply to complex structures. In the forced response study, the modal, subspace and direct steady-state response analyses in ABAQUS are compared to each other and with the FRF synthesis in LMS VL Acoustics. Results show that only the direct method can take friction effects fully into account and is the most accurate method. In the numerical analysis with acoustic boundary elements, the following methods are compared in terms of performance and accuracy for a model of a sphere, a cat-eye radiator, a pad-on-disc model and a simplified brake system: the plane wave approximation, LMS's direct and indirect BEM, LMS's indirect fast multipole BEM (FMBEM), and the fast multipole BEM with the Burton-Miller formulation (FMM) implemented in ESI VA. These results suggest that, for a full brake system, the plane wave approximation or the FMM are suitable.
Acoustics & Vibration Unit, School of Engineering and Information Technology, UNSW@ADFA, Canberra
ABSTRACT
Since the early 1930s brake squeal has been a problem for NVH departments, and the high-pitched noise causes customers to lodge costly warranty complaints. Due to its friction-induced nature, material properties and operating conditions, the problem is non-linear and highly complex. In the past, research has been focussed on mode-coupling instability, predicted by the complex eigenvalue analysis (CEA). However, for unstable modes not detected by CEA, friction-induced energy fed back by pad modes, arising from the friction coefficient, pressure variations and non-linear material properties, has been shown by means of non-linear time series analysis and the acoustic boundary element method to cause friction-induced pad squeal or to amplify mode-coupling of brake components for a pad-on-plate system. It is suggested that pad mode instabilities be treated as a stochastic process defined by a random three-parameter space: the mean changes in kinetic energy, frequency and acoustic power caused by changes in pressure or friction coefficient. It is shown that, for a pad-on-plate system and for a simplified pad-on-disc brake system, this stochastic approach enables the probability to be calculated for a specified increase in kinetic energy or a specified change in frequencies, thus allowing assessment of brake squeal propensity and strategies for controlling brake squeal.
(1) Faculty of Engineering, Oita University, Oita, Japan (2) Venture Business Laboratory, Oita University, Oita, Japan (3) Graduate School of Engineering, Oita University, Oita, Japan
ABSTRACT
With the rapid progress of computer technology, numerical simulations based on the wave equation, such as FEM and BEM, have become powerful tools for the acoustical design process. The authors have been developing a system for large-scale finite element sound field analysis in both the time and frequency domains in order to analyze sound fields in rooms with complicated boundary conditions. One of the problems in using numerical simulations in the design process is how to model the complicated geometries of architectural spaces. Typically, architectural spaces have several uneven structures such as windows, doors, light fixtures and so on. Although it is possible to model the geometry of these structures including small details such as a window frame, a simulation using an FE model with detailed room geometry requires a large computational cost. From a practical point of view, therefore, the use of simplified FE models that do not affect the acoustics of the rooms is desired.
In this paper, a series of simulations using FE models with different levels of geometric approximation is conducted to reveal the influence of the geometry representation on the simulated sound field of rooms. A small office with a volume of 55 m^3 is selected for the simulation and four FE models are created. The impulse responses and several room acoustical parameters such as T30, EDT and D50 obtained from each simulation are compared at frequencies of 125 Hz to 1 kHz.
Department of Mechanical Engineering, Boston University, Boston, MA, USA
ABSTRACT
The papers on propagation in a fluid-saturated porous solid which Biot published in the Journal of the Acoustical Society of America in 1956 rank among the most highly cited papers in the history of acoustics. The Google Scholar web site (March 2010) shows 2662 citations for the first paper. These papers, along with a third in 1962, can be taken as what is called the Biot theory. This, plus modifications described in several papers by Stoll, the first with Bryan, published in 1970 and later, has come to be known as the Biot-Stoll theory. The present paper argues that the apparent current whole-hearted acceptance of the Biot theory, and especially of the later modifications associated with Stoll, has been made with insufficient critical thought. The derivations, while appealing, are heuristic, and there is little reason to expect broad applicability. The first of Biot's 1956 papers, the low-frequency-range paper, contains eight parameters: three density parameters, four elastic modulus parameters, and one frictional parameter (b) analogous to a dashpot constant. The present paper contends that the equations as derived, with eight adjustable parameters, are sufficient, and perhaps more than sufficient, to predict all three types of disturbances which can rationally be expected for a porous medium in the limit of very low frequencies: a propagating longitudinal wave, a propagating shear wave, and a disturbance governed by the diffusion equation (Darcy mode). Biot, however, did an analysis in this paper that was inconsistent with the low-frequency limit, threw away the terms involving b, and in so doing came up with a prediction of a second propagating longitudinal wave rather than a disturbance governed by a diffusion equation. The present paper questions whether such a propagating slow wave with small attenuation would ever exist, in any frequency range, in any realistic porous medium. The experimental confirmation reported by Plona in 1980 is argued to be contrived, in that the artificial porous medium was perfectly periodic. Also, while the low-frequency Biot model predicts a plausible quadratic frequency-dependent attenuation for both the compressional and shear waves, it is argued that it is unlikely that the eight parameters can be simultaneously adjusted to yield realistic predictions of the attenuation constants as well as the other basic parameters for the three types of disturbances. A principal component of Stoll's modifications, in which the elastic moduli are replaced by complex elastic constants, is shown to violate causality considerations.
University of Bradford, Bradford, England, UK
ABSTRACT
The behaviour of the acoustic intensity field near an open end of a round pipe is complex and poorly understood. In this paper we propose an efficient method to study the acoustic intensity in a pipe with Neumann boundary conditions on its walls. We assume that the sound field in the pipe is excited by a plane wave incident from the far field on its open end. We also assume that this end of the pipe is flanged in a rigid baffle. We represent the total sound field in the pipe as a superposition of propagating and evanescent modes. We use the Huygens-Fresnel principle to formulate the radiation conditions at the open end of the pipe. We adopt the orthogonality condition for normal modes to derive the equation for the modal coefficients in the reflected sound field. We accurately compute the singular integrals which appear in the equation for the modal coefficients using the Telles numerical integration scheme. Finally, we validate the proposed model against experimental data obtained for a 150 mm PVC pipe.
Technische Universitaet Dresden, Germany
ABSTRACT
Nonlinear interactions between a free jet and an acoustic field play a major role in the sound production of musical instruments like recorders, flutes and organ pipes, but also in various technological applications. Solving the compressible, unsteady Navier-Stokes equations makes it possible to resolve both the sound production mechanism and the sound propagation in the nearly linear resonator and in the far field. However, the disparity in length scales between the vortex shedding at the labium and the acoustic wavelengths, and the resulting computational effort, prevent efficient optimization of the resonator. To overcome these limitations we separate sound production and propagation. Here the resonator is assumed linear and described in the frequency domain by the Helmholtz equation. The sound production mechanism is modeled by acoustic sources. These sources result from unsteady, compressible RANS calculations. According to the vortex sound theory of Howe, the sound production is modeled by acoustic dipole sources that result from interactions between the jet and the acoustic field. This approach allows the efficient calculation of different resonator geometries without the necessity of solving the complete sound production mechanism again. We present results based on this approach as well as experimental validation data.
Institute of Technical Acoustics, RWTH Aachen University, Germany
ABSTRACT
Over the last decades Virtual Reality (VR) technology has emerged as a powerful tool for a wide variety of applications such as rapid prototyping, evaluation, therapy, or training tasks. For high quality auralizations (in analogy to visualization) of virtual environments, methods of Geometrical Acoustics (GA) are mostly applied to simulate the propagation of sound inside enclosures. By adapting acceleration structures such as BSP trees and octrees, current implementations can manage the computational load of moving sound sources around a moving receiver in real time -- even for complex scenarios. However, insertion, modification and extraction of geometrical objects are basic operations in many real-world experiences, but hierarchical spatial data structures do not support them efficiently. For this purpose the concept of Spatial Hashing was introduced, which is usually applied to collision detection tests of deformable objects in Computer Graphics. This contribution describes the design, implementation and integration of a dynamic object controller in the real-time room acoustics simulation software RAVEN. By adapting the concept of Spatial Hashing to the simulation algorithms, RAVEN is able to handle geometry modifications in real time. The performance of the newly implemented data-handling and simulation routines is briefly discussed and compared to that of brute-force and BSP-based algorithms.
Department of Physics, Naval Postgraduate School, USA
ABSTRACT
In this paper, a review of several features of the acoustic vector field will be presented. The theoretical foundation for the acoustic energy flow will be used to describe the concepts of the active and reactive components of the complex acoustic intensity field. Recently established phenomenology associated with the flow of acoustic intensity near planar boundaries via the visualization of streamlines will also be discussed. Features of the complex field scattered from simple objects will be presented, along with phenomenology found in multipath environments such as channels and waveguides. Methods for validating vector field extensions of existing numerical models in the field of underwater acoustics are defined, and some specific examples of the properties of the complex field in ocean environments are provided. Methods for extracting information about the seafloor (e.g., geoacoustic inversion) are also described. Finally, a brief overview of common beamforming techniques is presented.
(1) Department of Electronics and Telecommunications, Norwegian University of Science and Technology, Trondheim, Norway (2) Department of Mathematics, Norwegian University of Science and Technology, Trondheim, Norway
ABSTRACT
A common type of integral to solve numerically in computational room acoustics and other applications is the diffraction integral. Various formulations are encountered but they are usually of the Fourier type, which means an oscillating integrand that becomes increasingly expensive to compute with increasing frequency. Classical asymptotic solution methods, such as the stationary-phase method, might have limited accuracy across the relevant frequency range. The method of steepest descent is known to offer efficient evaluation of such integrals, but for most diffraction integrals the optimum deformed integration path might be impossible to find analytically. A recent numerical version of the method of steepest descent finds an approximate path numerically, and this paper shows the application of this method to one specific edge diffraction integral which is valid for infinite and finite edges. The required integration path sections are found numerically by applying a Taylor expansion of the integrand oscillator function, involving up to the fourth-order derivative for this example, and a subsequent series inversion. Once the path is available, two efficient quadrature methods are used for the exponentially decaying integrands, Gauss-Laguerre and Gauss-Hermite. The method is compared with brute-force numerical integration using Gauss-Kronrod quadrature in the Matlab implementation. Numerical examples demonstrate that the new method has a computation time which is independent of frequency and of edge length, whereas that of the brute-force method depends heavily on frequency as well as edge length. It is shown that the accuracy of the new method decreases for low frequencies and for geometrical cases where the receiver point is near a zone boundary. Methods to tackle these limitations are outlined.
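To illustrate the Gauss-Laguerre step mentioned above for integrands that decay exponentially along the deformed path, the following minimal sketch evaluates a generic integral of the form int_0^inf g(x) exp(-x) dx (the actual edge diffraction integrand and the numerically determined steepest-descent path are not reproduced here):

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

def gauss_laguerre(g, order=20):
    """Approximate the integral of g(x)*exp(-x) over [0, inf) by Gauss-Laguerre quadrature."""
    x, w = laggauss(order)      # nodes and weights for the weight function exp(-x)
    return np.sum(w * g(x))

# Example: integral of exp(-x)/(1+x) over [0, inf), whose exact value is about 0.596347
print(gauss_laguerre(lambda x: 1.0 / (1.0 + x)))
```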
Physics Department, University of Auckland, New Zealand
ABSTRACT
Wavefront modelling arises from a solution of the wave equation which expresses the acoustic field as a sum of a series of phase integrals. Each phase integral is directly linked to ray paths with a given sequence of reflections or turning points. Asymptotic evaluation of the phase integrals gives the amplitude, phase and arrival time of pulses from an acoustic source which can be used to construct the waveform at a receiver. The results apply everywhere including near caustics and acoustic shadow zones and show that ray theory can be used at low frequencies. Recent developments in wavefront modelling including reflection from moving surface waves will be discussed.
(1) Kyoto University Pioneering Research Unit, Japan (2) Dept. of Architecture and Architectural Eng., Graduate School of Eng., Kyoto University, Japan
ABSTRACT
The finite-difference time-domain method considering longitudinal and shear waves and two types of damping terms has been proposed as a prediction method for structure-borne sound. In the method, both solids and fluids are assumed to be governed by a unique set of equations of motion and viscoelastic constitutive equations using averaged material parameters. Herein the formulation of the method for inhomogeneous anisotropic media is presented and some numerical examples are shown. The comparison between predicted and measured data of floor impact noise in a two-story concrete building is first introduced from the viewpoints of energy decay and frequency characteristics. Next, to investigate the accuracy of the prediction method, the numerical results for a simple vibroacoustic system of a circular plate clamped in a duct are compared with analytical ones obtained by the thin-plate theory. In the comparison, discrepancies in eigenfrequency can be observed because the considered plate is rather thick. However, the predicted eigenfrequencies in vacuo correspond well with those derived from the thick-plate theory. Last, the propagation of waves in a wooden block and the radiated sound are calculated and the numerical results are compared with measured ones. Although the material parameters need to be identified by use of the measured data, the calculated results are in good agreement with the measured ones.
University of Salford, Salford, UK
ABSTRACT
Double-porosity materials have been shown to provide considerably higher sound absorption than single-porosity materials. In this paper, the acoustic properties of double-porosity fibrous materials are studied analytically. Two models that consider the multiscale nature of the materials along with the slip and temperature-jump boundary effects at the microscopic level are introduced. The first model corresponds to a regular array of microporous fibres. An array of solid regularly-arranged microfibre clusters is considered in the second model. A hybrid analytical-numerical approach is used for the case when the solid microfibres are spatially randomly distributed within the clusters. A detailed parametric analysis and optimal parameters for maximizing sound absorption, calculated using the differential evolution algorithm, are also presented. It is concluded that double-porosity fibrous materials offer both reduced weight and a remarkable sound absorption enhancement with respect to their single-porosity counterparts.
School of Mechanical Engineering, Pusan National University, Pusan, Korea
ABSTRACT
This paper deals with the broadband noise due to the interaction between convected turbulent gusts and a rectilinear cascade of flat plates bounded by two parallel walls. We derive the formula for the acoustic power spectrum due to turbulence-cascade interaction. This three-dimensional theory is developed on the basis of the two-dimensional theory of Cheong et al. The predictions using this three-dimensional model are compared with those using the previous two-dimensional model. These comparisons clarify the effects of incident turbulent gust components in the span-wise direction on the inflow broadband noise, which is essential for understanding the broadband noise reduction achievable by using a rectilinear cascade of leaned or swept flat plates.
(1) Faculty of Engineering, Kanagawa University, Yokohama, Japan (2) Faculty of Engineering, Niigata University, Japan (3) Graduate School of Frontier Sciences, The University of Tokyo, Japan (4) Cybernet Systems Co., Ltd., Japan
ABSTRACT
The fast multipole boundary element method (FMBEM), which is an efficient BEM employing the fast multipole method (FMM), is known to suffer from instability at low frequencies when the well-known diagonal form for the translation of multipole/local coefficients is employed. To overcome this problem, we have already developed a low-frequency FMBEM (LF-FMBEM), which is based on the original multipole expansion theory with the translation techniques proposed by Gumerov and Duraiswami for avoiding the low-frequency instability. In the present paper, the degenerate boundary formulation, often referred to as the dual BEM, is discussed in the framework of the LF-FMBEM. The degenerate boundary formulation enables not only the analysis of degenerate boundary models which have unknowns on both sides of the boundaries, but also the avoidance of the well-known fictitious eigenfrequency difficulties for exterior problems. A concrete computational procedure of the LF-FMBEM based on the degenerate boundary formulation is described in detail, resulting in O(N) operation counts and memory requirements. The computational accuracy and efficiency are validated through numerical experiments. Moreover, practically appropriate numerical settings for the truncation numbers of the multipole/local expansion coefficients and for the lowest level of the hierarchical cell structure used in the FMM are investigated. Numerical results and the computational efficiency of the LF-FMBEM are compared with those of the high-frequency FMBEM (HF-FMBEM), in which the diagonal form is employed.
(1) Department of Building, School of Design and Environment, National University of Singapore, Singapore (2) Housing and Development Board, HDB Hub, Singapore
ABSTRACT
In the past few decades, researchers have investigated the negative evaluation of the noise environment (i.e. annoyance). Despite an extensive and rich literature on human noise annoyance experiences, there has been very limited research effort directed at acoustic comfort among residential dwellers. With the technological advancement in many aspects of our living environment in recent years, quality-of-life issues have become a prime concern. Acoustic comfort is one such key aspiration of our living environment. Acoustic comfort among high-rise dwellers, especially in the dense urban residential environments of the tropics, has not yet been investigated. Since research on acoustic comfort is nascent, there is a need for a comprehensive evaluation framework and an acoustic comfort model developed on a sound theoretical basis. The current study endeavors to expand the conceptualization of acoustic comfort among high-rise dwellers in the tropics. A novel acoustic comfort model based on the theory of noise annoyance by Stallen (1999) is proposed in this paper. To evaluate acoustic comfort among high-rise dwellers in the tropics, a comprehensive noise survey among 604 households was conducted using a stratified sampling technique (based on major environmental noise sources). Evaluation of acoustic comfort in the high-rise built environment was investigated with respect to major environmental and neighbour noise sources. Perceived acoustic comfort responses were correlated with several acoustical and non-acoustical factors related to the indoor noise exposure due to major environmental noise sources. In addition, subjective acoustic comfort responses were also correlated with the perceived neighbour noise and associated disturbance. Factor analysis and multiple regression analysis of the data from the noise survey resulted in the development of an acoustic comfort model which demonstrates that acoustic comfort depends on the perception of noisiness and the associated perceived disturbance by major environmental noise sources in the high-rise residential environment in the tropics. Structural Equation Modeling (SEM) was then used to investigate the relationships between the variables that influence acoustic comfort.
Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
ABSTRACT
Based on the wave propagation theory of multi-layered media and an optimization algorithm, the complex elastic modulus of viscoelastic materials is optimized under different physical conditions to improve material absorption performance. Isoclines of the absorption coefficient on the complex elastic modulus of absorption materials are presented for certain boundary conditions. Requiring the absorption coefficient to be larger than 0.8, the ranges of elastic modulus and loss factor of the viscoelastic materials for different boundary conditions are given and discussed. The results show that the sound absorption performance can be improved effectively by adjusting the complex elastic modulus of viscoelastic materials. The range of elastic modulus is found to be very sensitive to the boundary conditions when the absorption coefficient is required to be larger than 0.8. The difficulty of adjusting the complex elastic modulus can be reduced with a certain steel backing, but the absorption performance of viscoelastic materials becomes worse with a water backing.
The University of Sheffield, UK Northumbria University, UK
ABSTRACT
Maximising the natural ventilation of a building can be beneficial in terms of comfort and reduced reliance on air conditioning. In noisy urban areas this can conflict with the need to reduce the ingress of external noise. In this study the effect of building exposure to noise on natural ventilation potential is investigated. The occurrence of window openings on a building façade was adjusted according to road traffic noise levels. Road traffic noise levels at the building façade were modelled using a noise map of Manchester in CadnaA. Window openings were adjusted in representative DesignBuilder/EnergyPlus building energy models with calculated natural ventilation and opening schedules. This enabled acoustic considerations to be quantified in terms of building ventilation and chiller energy use at the whole-building level over a summer period.
Institute of Sound and Vibration Research, University of Southampton, UK
ABSTRACT
A major source of aircraft noise is the broadband noise that is generated by the turbulent wakes of the fan interacting with the stator guide vanes. This can be modelled using analytical techniques if the geometry and flow properties are simplified, or by using large-scale computational fluid dynamics simulations which are currently too expensive for engineering design purposes. This paper describes the use of stochastic computational techniques to represent the turbulent flow and to reduce the computational costs. It is used to predict noise generated by an airfoil interacting with a turbulent flow.
With this stochastic approach a model for sound propagation in non-uniform flows (the linearized Euler equations) is used together with a stochastic description of the turbulence. This uses a set of point vortices with random strengths to represent the turbulent velocity field impinging on the airfoil. These vortices are convected with the mean flow and the energy spectrum and correlation function of the turbulence are controlled by the choice of velocity field induced by a vortex. The loss of correlation in time of the vortex strengths is modelled using first- or second-order Langevin equations. In this paper, the computational method will be described and several validation test cases will be presented using comparisons with an existing analytical solution for a flat plate interacting with homogeneous isotropic turbulence. Then the influence of evolving turbulence will be studied. Finally the stochastic method will be used to model the effect of non-stationary turbulence properties on interaction noise.
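As an illustration of the first-order Langevin modelling of the loss of correlation of the vortex strengths, the sketch below advances an array of strengths as an Ornstein-Uhlenbeck process with correlation time `tau` and stationary standard deviation `sigma` (placeholder values; the paper's second-order variant and actual parameters are not reproduced):

```python
import numpy as np

def langevin_step(gamma, dt, tau, sigma, rng):
    """One explicit step of dGamma = -(Gamma/tau) dt + sigma*sqrt(2/tau) dW.

    The autocorrelation of the strengths then decays as exp(-t/tau), with
    stationary standard deviation sigma.
    """
    noise = rng.standard_normal(gamma.shape)
    return gamma - (dt / tau) * gamma + sigma * np.sqrt(2.0 * dt / tau) * noise

rng = np.random.default_rng(0)
sigma0 = 1.0
gamma = sigma0 * rng.standard_normal(1000)     # initial random vortex strengths
for _ in range(100):
    gamma = langevin_step(gamma, dt=1e-4, tau=1e-2, sigma=sigma0, rng=rng)
```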
Acoustic Group, INTEC, Ghent University, Gent, Belgium
ABSTRACT
Scattering of sound by trees is either wanted or unwanted, depending on the application. Behind noise barriers, trees have a positive effect on the wind field, but could decrease barrier performance in the absence of wind. In a street canyon, the presence of trees increases the diffusivity of the sound field. However, little is known about the inter-species differences with respect to scattering. In this paper, an in-situ and easy-to-deploy measurement methodology is presented to estimate the amount of acoustic scattering by a single tree, using a pulse generator (e.g. an alarm pistol) and a single microphone. By performing time-domain analysis, the direct sound path and scattered waves can be separated. Furthermore, early and late scattering by the tree crown can be distinguished. Example measurements are presented, and the degree of scattering is linked to geometrical crown properties.
(1) The HEARing CRC, Melbourne, Victoria, Australia (2) National Acoustic Laboratories, Sydney, NSW, Australia
ABSTRACT
Electronic communications equipment such as telephones, two-way radios, computers, amplified hearing protectors and hearing aids can reproduce noise with a loudness in excess of the speech they reproduce. Noise that is louder than conversational speech is typically perceived as being less comfortable and in some cases can cause injury to the listener, such as an acoustic shock injury or a hearing loss. The conventional approach to controlling loud noise is to use a sound level limiter; however, conventional sound level limiting suffers from several shortcomings. Firstly, there is always a compromise when setting a limiting level: if it is set to a high level then the listener can be subjected to loud sound, but if it is set to a low level the speech will be limited, which will reduce its quality and intelligibility. Secondly, conventional methods of sound level limiting do not adapt to the sound to which the listener is acclimatised. A new approach to sound level limiting is to use the loudness of the speech that the listener is hearing as a reference and to reduce the loudness of non-speech sounds with respect to this reference. This novel method is called Speech Referenced Limiting (SRL). The limiting level is adaptive and is automatically set by the loudness of the speech to which the listener is acclimatised. When done on a frequency-specific basis, an umpire's whistle is reduced to the maximum level of the treble of a recent conversation and the rumble of a truck to the maximum level of its bass. This is achieved by estimating the maximum loudness of speech at different frequencies to produce a speech reference and limiting sound that exceeds this reference. A digital signal processing algorithm has been developed to perform the method. Details of the SRL scheme and experimental data on the effects of SRL on speech and noise are presented.
Ritsumeikan University, Kyoto, Japan
ABSTRACT
In recent years, the mosquito signal has been increasingly used as a method of dispersing noisy groups of young people from public spaces. The mosquito signal is a high-frequency signal around 17 kHz that is audible to young people but not to older adults. A recent study on an acoustic system using the mosquito signal indicated that it was able to disperse young people gathered at a park in Japan in around three minutes, which clearly shows that the mosquito signal is both effective and useful. In our study, we have attempted to design a highly uncomfortable mosquito signal for the dispersal of human noise sources. We focused on three types of mosquito signals: sine signals, environmental signals, and signals based on the auditory sense. First, we designed the sine signals. We used a highly pure sine wave because previous reports have suggested that highly uncomfortable signals tend to have a sharper spectral attenuation away from the main peak. Next, we designed the environmental signals. We used a combination of three different signals: a motor noise to simulate a cleaner, a crying noise to simulate an infant, and a scrubbing noise to simulate foamed styrol. We then applied a high-pass filter to these signals. Finally, we designed the signals based on the auditory sense. We used the same three signals as in the previous design but shifted them from a lower frequency band (1-5 kHz) to a higher frequency band, because humans mainly hear signals in the 1-5 kHz range. We used the mean opinion score (MOS) to conduct a subjective evaluation. Results demonstrated that the new mosquito signals were more uncomfortable than the conventional ones, and that the pure sine mosquito signal at 15 kHz was the most uncomfortable of the proposed signals. In future work, we intend to design an even more uncomfortable mosquito signal based on complex sine waves. We will also focus on controlling the signal's output area to reduce the negative effects on local citizens and animals.
University of Wuppertal, Wuppertal, Germany
ABSTRACT
A well known method to build a feed-forward active noise control (ANC) system for damping in a certain area is based on the Kirchhoff-Helmholtz integral. The setup uses a combination of pressure and velocity microphones to measure the primary field (noise) and to reproduce a phase-inverted secondary field, resulting in attenuation of the noise. This paper presents a method to find a control input for an adaptive algorithm to enhance the damping effect. This is achieved by decomposing the incident and reflected waves on the system's borderline.
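A minimal sketch of the plane-wave decomposition on which such a borderline separation can be based, assuming collocated pressure and particle-velocity signals and normal incidence (the paper's full Kirchhoff-Helmholtz formulation is not reproduced):

```python
import numpy as np

RHO = 1.21   # air density in kg/m^3 (assumed)
C = 343.0    # speed of sound in m/s (assumed)

def split_incident_reflected(p, u):
    """Split collocated pressure/velocity signals into incident and reflected components.

    For a one-dimensional field, p = p_inc + p_ref and rho*c*u = p_inc - p_ref, hence
    p_inc = (p + rho*c*u)/2 and p_ref = (p - rho*c*u)/2.
    """
    z = RHO * C
    return 0.5 * (p + z * u), 0.5 * (p - z * u)
```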
Department of Architecture, National Cheng Kung University, Tainan City, Taiwan
ABSTRACT
The oblique micro-perforated panel (OMPP) is distinct from the micro-perforated panel (MPP) in terms of structure, such as a non-circular appearance and oblique holes. We employ multiple linear regression (MLR) to estimate the sound absorption coefficient of the OMPP for various settings of the structural factors. The analytical results indicate that the MLR provides a satisfactorily reliable correlation between estimated and measured sound absorption coefficients.
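A minimal sketch of the multiple-linear-regression step, assuming a design matrix whose columns are hypothetical OMPP structural factors and a vector of measured absorption coefficients (the factor names and numbers below are placeholders, not the study's data):

```python
import numpy as np

# Rows: specimens; columns: e.g. hole diameter (mm), hole obliquity (deg),
# perforation ratio (-), cavity depth (mm) -- placeholder factors and values.
X = np.array([[0.5, 30.0, 0.010, 50.0],
              [0.8, 45.0, 0.020, 30.0],
              [0.5, 60.0, 0.010, 30.0],
              [0.8, 30.0, 0.020, 50.0],
              [0.6, 45.0, 0.015, 40.0],
              [0.7, 60.0, 0.015, 50.0]])
alpha = np.array([0.62, 0.71, 0.55, 0.78, 0.68, 0.74])   # illustrative measurements

A = np.hstack([np.ones((X.shape[0], 1)), X])              # add intercept column
coeffs, *_ = np.linalg.lstsq(A, alpha, rcond=None)        # ordinary least squares
alpha_est = A @ coeffs
print(coeffs, np.corrcoef(alpha, alpha_est)[0, 1])        # coefficients and fit correlation
```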
School of Mechanical Engineering, The University of Adelaide, Adelaide, SA 5005 Australia
ABSTRACT
A common problem in local active noise control is that the zone of quiet centered at the physical microphone is too small to extend to the desired location of attenuation, such as an observer's ear. The physical microphone must therefore be placed at the desired location of attenuation, which is often inconvenient. Virtual microphones overcome this by shifting the zone of quiet away from the physical microphone to a desired location of attenuation, referred to as the virtual location. In an effort to extend the zone of quiet generated at the virtual location, a virtual acoustic energy density method is developed in this paper for use in a three dimensional sound field. This virtual energy density method uses a modified version of the remote microphone technique to estimate the total acoustic energy density at a virtual location. Experimental results of active noise control at a virtual acoustic energy density sensor and a virtual microphone in a three-dimensional sound field are presented for comparison. Minimising the total virtual acoustic energy density with the active noise control system is shown to create a spatially extended zone of quiet at a fixed virtual location compared to virtual pressure control.
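For reference, the quantity that the virtual sensor estimates combines potential and kinetic terms; a minimal sketch, assuming collocated estimates of pressure and the three particle-velocity components at the virtual location:

```python
RHO = 1.21   # air density in kg/m^3 (assumed)
C = 343.0    # speed of sound in m/s (assumed)

def total_energy_density(p, vx, vy, vz):
    """Instantaneous total acoustic energy density E = p^2/(2*rho*c^2) + rho*|v|^2/2."""
    potential = p ** 2 / (2.0 * RHO * C ** 2)
    kinetic = 0.5 * RHO * (vx ** 2 + vy ** 2 + vz ** 2)
    return potential + kinetic
```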
The Hong Kong Polytechnic University, Hung Hom, Hong Kong.
ABSTRACT
A model study of a thick barrier of finite length on a hard reflecting ground is presented. One of the vertical edges is isolated so that diffraction only occurs from the top edge and one of the side edges. The reference case is when the side edge angle is 90 degrees (rectangular edge), which is compared to several cases with the angle increasing from 90 to 180 degrees (circular edge). In this paper the reference case is presented and compared with the 180 degree case. The measurements were conducted in a semi-anechoic chamber using a line source for the frequency range 250 Hz to 20 kHz. The barrier thickness was 150 mm, which corresponds to a characteristic frequency of about 2 kHz. The measurement results show that the insertion loss is essentially broadband. It increases with frequency, and the rate of increase is relatively rapid at 2 kHz, the characteristic frequency of the edge. However, there is a dip in the insertion loss at about 8 kHz for the circular edge.
(1) The HEARing CRC, Melbourne, Victoria, Australia (2) National Acoustic Laboratories, Sydney, NSW, Australia
ABSTRACT
In recent years active noise reduction (ANR) technology has become more commonly available in personal hearing devices such as earmuffs, headphones, headsets and earphones. In the absence of ANR technology these personal devices are less effective at reducing low frequency environmental noise compared to reducing high frequency environmental noise. ANR technology is a well-suited addition to these devices as it is best at reducing low frequency environmental noise and hence improves the performance of these devices in an area in which they are normally poor performers. An experiment was conducted which involved performing objective tests on 13 devices that incorporated ANR technology (earmuffs, headphones, headsets and in-ear earphones) using an acoustic test fixture (ATF). The devices were divided into four groups based mainly on structure. The characteristics of the passive and active performance of these devices are presented. Each device was examined on its attenuation of broadband noise, overload response, internally generated noise, impulse noise response, and stability to movement. The results show a large range of responses between devices in terms of attenuation and overload, and highlight distinctive differences between the device groupings. The maximum active noise attenuation was 19 dB on average (range 16-25 dB) for circumaural and in-ear devices, and 8 dB on average (range 5-11 dB) for supra-aural devices. The industrial devices performed well in high noise conditions, maintaining their active noise reduction up to at least 125 dB SPL. The noise generated by the active electronics was 25 dBA on average (range: 19-32 dBA). Although the ANR technology in all the devices was found to reduce environmental noise in the lower frequency region it was also found to increase it in some other frequency regions. The addition of ANR technology did not offer any further reduction in impulse noise level and in many cases resulted in poorer impulse noise reduction.
(1) Seikei University, Tokyo, Japan (2) Kobe Steel Ltd., Kobe, Japan
ABSTRACT
A new sound absorption material, a micro-perforated thin aluminum plate, has been developed; it is resistant to water, oil and heat. However, a thin plate is easily vibrated by sound pressure, and this vibration affects the sound absorption performance. We conducted experiments to clarify the relation between the sound absorption coefficient and the vibration of the micro-perforated plate. Natural frequencies and vibration modes of the micro-perforated thin aluminum plate were observed using a scanning laser Doppler vibrometer, and the sound absorption coefficient of the plate was measured by the two-microphone method. We found that the sound absorption performance was affected by the natural vibration modes and that there is a particular mode that decreases the sound absorption performance remarkably, occurring when the particle velocity of the air and the vibration velocity of the plate are in phase. We also found that damping is effective in improving the local depression of the sound absorption coefficient.
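A minimal sketch of the two-microphone (transfer-function) evaluation of the normal-incidence absorption coefficient mentioned above, assuming a measured complex transfer function `H12` between the two microphones, spacing `s`, and distance `x1` from the sample to the farther microphone (following the usual ISO 10534-2 relations; not the authors' code):

```python
import numpy as np

def absorption_two_microphone(H12, f, s, x1, c=343.0):
    """Normal-incidence absorption coefficient from the two-microphone transfer function.

    H12 : complex transfer function p2/p1 (microphone 1 farther from the sample)
    f   : frequency vector in Hz; s : microphone spacing in m; x1 : sample-to-mic-1 distance in m
    """
    k = 2.0 * np.pi * f / c
    R = (H12 - np.exp(-1j * k * s)) / (np.exp(1j * k * s) - H12) * np.exp(2j * k * x1)
    return 1.0 - np.abs(R) ** 2
```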
(1) Department of Building Services Engineering, The Hong Kong Polytechnic University, P.R.China (2) Department of Mechanical Engineering, The Hong Kong Polytechnic University, P.R.China
ABSTRACT
The sound transmission losses across expansion chambers with the upper and lower chambers offset (or staggered) inside a duct were studied experimentally in the present investigation. The sound transmission losses were measured using the four-microphone method (one pair upstream and one pair downstream of the chambers). Pressure transducers were used to record the pressure fluctuations within the chambers. An anechoic termination was included in order to minimize the acoustic reflection at the exit of the test rig. Compared to the conventional expansion chamber, the staggered chamber setting results in a higher sound transmission loss at higher frequencies (below the first duct cut-off), and this increase in the sound transmission loss grows with the degree of chamber offset. The rise in the sound transmission loss is abrupt, showing that there exists a critical frequency at which the change is excited. The increase in the sound transmission loss is also relatively broadband, though it drops as the forcing frequency is further increased. The critical frequency appears weakly dependent on the degree of offset. On the low-frequency side, an increase in the chamber offset does not result in a significant change in the sound transmission loss, except that a larger fluctuation of the latter can be observed at increased offset.
Ritsumeikan University, Kusatsu, Kyoto, Japan
ABSTRACT
Recently, the use of sound field reproduction systems for a highly realistic sensation has been increasing. Binaural and transaural systems have been proposed as potential sound field reproduction techniques, but both exhibit flaws. The binaural system gives some listeners an oppressive feeling because headphones must be used, and the transaural system, although it avoids the oppression problem by using multiple loudspeakers instead of headphones, suffers a distortion of the realistic sensation caused by spatial crosstalk. We therefore propose a "semi-transaural" system to address these flaws. Our system can reduce spatial crosstalk because the loudspeakers for the left ear are located near the left ear and those for the right ear are located near the right ear. It can effectively reduce spatial crosstalk while avoiding an oppressive feeling because the loudspeakers are not located on the ears. However, this system is more sensitive to environmental noise than the binaural system, and the realistic sensation is degraded because environmental noise is easily picked up by the listener. To overcome this problem, we first tried to suppress the environmental noise and then tried to create a stronger highly realistic sensation system. The N-1-1 ANC system, which has already been proposed as a noise suppression method, can effectively suppress non-directional noise by using reference microphones, and we have therefore applied this system to the proposed semi-transaural system. Our system is composed of three reference microphones, one canceling loudspeaker, and one error microphone. The reference microphones capture non-directional noise in the room, the left/right loudspeakers of the semi-transaural system are used as the canceling loudspeaker, and the error microphone is located at the listening point. First, the proposed system captures non-directional noise with the reference microphones. Next, it calculates the canceling signal for the noise based on the captured signals and emits it with the canceling loudspeaker. Finally, it suppresses the noise at the error microphone position. The secondary path, which represents the transfer function between the canceling loudspeaker and the error microphone, can be estimated simply owing to the short distance between them. We conducted evaluation experiments with white noise, pink noise, and server noise and evaluated the suppression level for these noises at the error microphone position. Results showed that on average the noises were suppressed by 3.79 dB over the range 50-1000 Hz.
Ritsumeikan University, Kusatsu, Kyoto, Japan
ABSTRACT
Noise suppression methods based on reducing noise power are generally employed to address noise problems all over the world. It is very important to reduce loud noises such as traffic noise, construction noise and so on. However, even low-power noises can often be perceived as unpleasant in quiet surroundings. On the other hand, it has also been reported that quiet surroundings lacking a certain level of background noise can cause unpleasant and anxious feelings. Thus, our research focuses on reducing the unpleasant feeling without reducing the noise power, by adding an "artificial source" based on auditory masking to the noise. We especially try to convert higher-frequency noise into a comfortable sound by adding artificial sources. In this paper, we first generate higher-frequency noises based on wind noise or air-conditioning noise and then automatically design artificial sources for them based on auditory masking. The higher-frequency noises comprise three bandwidths and seven center frequencies between 2000 and 6000 Hz. The artificial sources are designed by calculating the frequency band and the power level which can mask the higher-frequency noises according to auditory masking theory. Finally, a more comfortable sound is realized by adding the artificial sources to the noise. We carried out two evaluation experiments with the artificial sources added to the higher-frequency noises. One is a relative evaluation experiment to assess whether the unpleasant feeling is reduced: subjects compared the higher-frequency noises before and after the artificial source was added. The other is an absolute evaluation experiment to assess the unpleasantness of each sound source: subjects rated each sound source on a five-grade scale. The two evaluation experiments confirmed that the narrower the higher-frequency noise band, the larger the difference in unpleasantness before and after adding the artificial source. Therefore, the narrower the noise band, the more the unpleasantness of the higher-frequency noise could be reduced without reducing the sound pressure level, regardless of its center frequency. In future work, we will try to achieve an even more comfortable sound based on a more detailed analysis of the auditory sense.
(1) Dept. of Mechanical Engineering Graduate School, Hanyang University, Korea (2) School of Mechanical Engineering, Hanyang University, Korea
ABSTRACT
The LMS (least-mean-square) type algorithm is the most widely used algorithm for active noise control. The LMS algorithm can easily obtain the complex transfer function in real time, so modified LMS algorithms that improve performance have been developed. In particular, the filtered-x LMS (FXLMS) algorithm has been applied to active noise control (ANC) and active vibration control (AVC). In this paper, the FXLMS algorithm is applied experimentally to ANC of a three-dimensional enclosure system.
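A minimal single-channel FXLMS sketch, assuming an FIR estimate `s_hat` of the secondary path is available (the experimental three-dimensional enclosure system is of course multichannel and more involved; signals and parameters below are placeholders):

```python
import numpy as np

def fxlms(x, d, s_hat, n_taps=32, mu=1e-3):
    """Single-channel filtered-x LMS.

    x     : reference signal correlated with the primary noise
    d     : primary disturbance at the error microphone (without control)
    s_hat : FIR estimate of the secondary path (control source -> error microphone)
    Returns the residual error signal.
    """
    w = np.zeros(n_taps)                 # adaptive control filter
    x_hist = np.zeros(n_taps)            # recent reference samples (newest first)
    xf_hist = np.zeros(n_taps)           # recent filtered-reference samples
    xs_hist = np.zeros(len(s_hat))       # reference history for the secondary-path model
    y_hist = np.zeros(len(s_hat))        # recent control outputs
    e = np.zeros(len(x))
    for n in range(len(x)):
        x_hist = np.concatenate(([x[n]], x_hist[:-1]))
        xs_hist = np.concatenate(([x[n]], xs_hist[:-1]))
        y = w @ x_hist                                   # control signal
        y_hist = np.concatenate(([y], y_hist[:-1]))
        e[n] = d[n] + s_hat @ y_hist                     # residual at the error microphone
        xf = s_hat @ xs_hist                             # reference filtered through s_hat
        xf_hist = np.concatenate(([xf], xf_hist[:-1]))
        w -= mu * e[n] * xf_hist                         # LMS update with filtered reference
    return e
```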
Peutz & Associates, Paris, France
ABSTRACT
Outdoor spaces, or very large venues for that matter, often offer an attractive way of hosting large events without undue complications regarding security and fittings. However, they usually do not provide the audience and the performers with as high a level of comfort as enclosed venues. More to the point, community noise control of such facilities can be particularly tricky.
What can be expected of such facilities? This paper intends to submit a few hints, looking at recent projects and developments.
(1) SEMAM - Secretaria Municipal de Meio Ambiente e Controle Urbano, ECPS - Equipe de Controle da Poluição Sonora, Fortaleza, Brasil (2) CAPS, Instituto Superior Técnico, TULisbon, Lisboa, Portugal
ABSTRACT
The Fortaleza noise mapping project was set up for the spatial representation of environmental noise indicators to obtain an essential tool to analyze and define strategies for Noise Pollution control in Fortaleza, Brazil. This is the first large scale noise map drawn for a large city in Brazil. Noise emissions from the most important sources contributing to the sound environment of the city, namely road traffic, railway noise, aircraft noise, industrial noise, and noise from entertainment areas were included. The method followed a hybrid approach, essentially calculation complemented with experimental measurements for validation and calibration. The large scale noise assessment allowed detailed studies of the noise impact of the Fortaleza International Airport, located well within the urban city area, the impact of the passage of the underground access light rail tunnel on the local soundscape and the impact of the Ceará Musical Event, which, though being part of the city cultural programme, takes place in central and seaside areas close to a public hospital. These studies will be presented and discussed, in the context of a geographical area where the fair climate allows long hours spent outdoors.
USP-University of São Paulo, Brazil
ABSTRACT
Growing cities face increases in environmental noise in some areas due to the expansion of transportation infrastructure and the concentration of noisy activities. Authorities need guidance, based on research, to balance development needs with the capacity of the urban environment to accept the resulting noise effects. Prevention requires investment, and both government and building constructors must share the expenses on a reasonable basis. In practice, sources can be controlled individually or in small groups, but not across the large multiplicity of an urban area. Consequently, average noise levels can rise to values that building constructors must accept as an environmental parameter to be considered in their projects. The government of the city of São Paulo requested an evaluation of the maximum capacity of normal building façades to isolate external noise. IPT-Institute for Technological Research performed several laboratory and field measurements of the Weighted Sound Reduction Index of windows with simple monolithic 3 mm glass, readily available commercially. The best result was Rw = 31 dB. For average protection aiming at an acoustic comfort level of 40 dB(A) in noise-sensitive rooms, even at positions close to the window, the external noise level should not exceed 71 dB(A). This value was chosen as the reference limit for noise impact on façades in the city resulting from any government intervention. This article reports details of the research, possibly useful for other cities with similar environmental profiles.
Vie En.Ro.Se. Ingegneria - Via Stradivari, Florence, Italy
ABSTRACT
The H.U.S.H. (Harmonization of Urban noise reduction Strategies for Homogeneous action plans) project moves from the evidence that harmonization of noise action planning methods is needed, not only in Italy but also in all the European countries where earlier legislation on noise planning was in place at the time the END Directive was adopted. The general objective is harmonizing national noise management standards with European Directive 2002/49/EC to obtain homogeneous noise Action Plans, contributing to the more general need of transposing, implementing and enforcing a common or harmonized environmental legislation among EU countries. Specific objectives of the project are: 1. to point out unsolved conflicts among current standards at Regional, National and European level, and to define common methods for designing strategic and specific solutions; 2. to define a new development system (procedures and database) for action planning by testing it in a pilot case; 3. to design guidelines in order to build a system for action plan applications, to support Regional, National and European Law reviews.
In this paper the results from action 5 of the HUSH project are described. This specific action focuses on collecting a database of solution cases for the reduction of noise in urban areas. Such a collection will provide a significant contribution to the further development of the Action Plan work platform. The collected data have been analysed and compared with the state of the art. Starting from both the data analysis and the collected experiences, problems encountered during the acoustic design phase will be catalogued and solved.
Department of Mechanics and Industrial Technology, University of Florence, Florence, Italy
ABSTRACT
The H.U.S.H. (Harmonization of Urban noise reduction Strategies for Homogeneous action plans) project moves from the evidence that harmonization of noise action planning methods is needed, not only in Italy but also in all the European countries where earlier legislation on noise planning existed at the time the Environmental Noise Directive (END) 2002/49/EC was adopted. The general aim is to harmonize national noise management standards with the END in order to obtain homogeneous noise Action Plans, contributing to the more general need of transposing, implementing and enforcing a common or harmonized environmental legislation among EU countries. The specific aims of the project are: a) to point out unsolved conflicts among current standards at Regional, National and European level; b) to define common methods for designing strategic and specific solutions; c) to define a new development system (procedures and database) for action planning by testing it in a pilot case; d) to design guidelines in order to build a system for action plan applications supporting Regional, National and European Law reviews. In this paper the results of a specific action of the HUSH project carried out by the University of Florence are described. This action focuses on building up the geographical data platform for city action planning. To achieve this aim, several city Action Plan data platforms, available in Italy and in other European countries, were analysed and compared with reference to the requirements set out by National, Regional and European regulations.
(1) VIPAC Engineers & Scientists Ltd., Sydney, Australia (2) Planifica Urbanismo y Gestión S.L., Castelló, Spain (3) VIPAC Engineers & Scientists Ltd., Adelaide, Australia
ABSTRACT
A number of situations may occur where multiple industrial noise sources combine to produce troublesome acoustic environments. In some cases, industrial activity develops adjacent to residential areas, creating the need to control noise propagation to the surroundings. In addition, the same industrial site could have a noise issue within the facility that may be of interest for the health and safety of personnel. In the former scenario, ISO 8297 "Determination of acoustic power levels of multisource industrial plants for evaluation of sound pressure levels in the environment" has been used to predict the far-field noise generated by industrial plants. This approach has proven to be useful where the required criteria are set in terms of acoustic power levels and only a single result value is expected for the whole plant. However, in the event that mitigation is required to control noise at residential receivers or for health and safety reasons, a more detailed method is needed to enable source identification and ranking of the relevant noise sources. This paper presents an alternative method that allows one to obtain the acoustic power levels of the individual noise sources using inverse theory applied to noise modelling. This is achieved by means of measurements of sound pressure levels around the noise sources and noise propagation modelling. Even though this method is well known, its application to real cases relies heavily on a combination of the quality of the measured data and the physical conditions of the problem. Thus, the numerical process usually involves the solution of ill-conditioned matrices that require regularisation in order to achieve stable results. This paper presents a practical example of the application of both methods in a real scenario, highlighting the advantages and disadvantages of the two.
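A minimal sketch of the regularised inversion referred to above, assuming a propagation matrix `G` from a noise model that maps candidate source strengths (here taken as mean-square quantities rather than levels in dB, one common linearisation) to receiver positions, and a Tikhonov term to stabilise the ill-conditioned system (the regularisation parameter `lam` is a placeholder):

```python
import numpy as np

def identify_source_strengths(G, p_meas, lam=1e-2):
    """Tikhonov-regularised least squares: minimise ||G q - p||^2 + lam * ||q||^2.

    G      : (n_receivers x n_sources) propagation matrix from the noise model
    p_meas : measured mean-square pressures at the receiver positions
    lam    : regularisation parameter trading residual against solution norm
    """
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ p_meas)
```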
GHD Pty. Ltd., Australia
ABSTRACT
Several desalination plants have been developed in Australia in the last few years. A thorough noise and vibration assessment has been undertaken for the reference design of the largest desalination plant in Australia, which is currently being constructed near Wonthaggi, Victoria. The plant is to deliver 150 billion litres of water a year by 2011, with capability to expand to 200 billion litres a year in the future. Operation of the plant also necessitates the construction of an approximately 80 kilometre pipeline and 80 kilometre power supply line. This development is an opportunity to review the environmental noise constraints associated with desalination plants and, in this case, major infrastructures in Australia. The aim of this paper is to discuss the path to approval of a large-scale industrial noise assessment, from the noise monitoring regime to the iterative modelling process and the identification of noise control measures to meet the project noise targets. Victorian legislation currently in force to control environmental noise impacts is also discussed, in particular the possibility of applying different guidelines and/or policies at different periods of the day. In this case, noise modelling was undertaken using Cadna-A noise modelling software. This is also the opportunity to present a modelling software package, which is not widely used in Australia at present. Modelling results outlined the noise control measures to be integrated in the plant design in order to meet a set of stringent noise criteria at the nearest sensitive receivers.
Arup Acoustics, Sydney, Australia
ABSTRACT
This paper presents a review of acoustic criteria currently used in office buildings with the aim of determining more satisfactory indoor noise level criteria for naturally ventilated office buildings. Indoor air quality standards related to the use of natural ventilation in buildings conflict with the control of the ingress of external noise through ventilation openings to meet internationally recognized background noise limits for building use. These noise standards generally assume, however, that buildings are sealed and air-conditioned, which contributes to meeting the stated recommended indoor noise levels. It is neither feasible nor appropriate to expect these noise standards to be achieved in naturally ventilated buildings. Therefore, to account for the thermal comfort benefit of natural ventilation and the ability to locally control natural ventilation and noise levels by closing windows, a controlled increase of the currently recommended indoor noise levels is explored, based on a review of typical conditions found in existing naturally ventilated buildings. To develop appropriate acoustic criteria for naturally ventilated buildings, consideration is given to adequate speech intelligibility of conversations and also to distraction during typical office activities.
(1) Finegold & So, Consultants, Centerville, Ohio, USA (2) Stockholm Environment Institute, Heslington, York, UK
ABSTRACT
This paper examines the issue of whether current Western noise policies would be effective and appropriate for use in developing and emerging countries. Differences in noise sources, available finances and noise control technologies, cultural norms, climate, views concerning the role of the government, etc. make it possible that different approaches might be needed in developing and emerging countries in order for their noise policies to be effective. It describes the current status of an international consortium of scientists and engineers, government representatives, and key stakeholders which is being organized to address this important topic. The World Health Organization and the International Commission on Biological Effects of Noise will be major partners in implementing the envisioned consortium. This paper presents a concept for a Strategic Approach to Environmental Noise Management in Developing Countries which has been developed by the Stockholm Environment Institute (SEI) to provide a foundation for the proposed international consortium. It also describes plans for 2010 and beyond for annual workshops, symposia and special sessions at acoustics congresses, and for promoting the evolving SEI concept with the governments of developing countries. The 2010 effort includes a web-based Forum sponsored by Tsinghua University in Beijing, and additional projects are being planned.
Wilkinson Murray, Sydney, Australia
ABSTRACT
Large industrial ventilation fans were causing complaints at nearby residential receivers. This paper discusses the various approaches to minimising noise levels, including a 12 Hz infrasound component, which was ultimately reduced using a tuned quarter-wave tube. Part of the process involved building a 1:10 scale model, in which various solutions were trialled before implementation on site.
Daewoo Engineering & Construction, Seoul, Korea
ABSTRACT
Disputes caused by construction noise have been increasing in Korea, especially in the Seoul metropolitan area. In this study, the current status of noise levels and noise control measures on construction sites was investigated. The acoustical performance of a temporary noise barrier developed to reduce construction noise was evaluated on a construction site as well as in the laboratory.
EMGA Mitchell McLennan Pty. Ltd., Sydney, Australia
ABSTRACT
In 2009 we embarked on a field study to improve the accuracy of our noise predictions for a future large-scale industrial facility beyond that of standard techniques. Noise propagation is significantly affected by prevailing meteorological conditions. Several standard modelling methods rely on measured meteorological data and estimation techniques. We decided to obtain realistic noise level data, including the effect of atmospheric conditions, by conducting an experiment on sound propagation. Loudspeakers were placed at a central location on a site and used as an artificial sound source. A constant signal consisting of a set of pure tones, with varying sound intensity levels between frequencies, was emitted at a fixed level for several hours at a time each night. The primary frequencies in the source signal were chosen to adequately simulate the main frequency range of machinery typical of the facility. The transmitter consists of a CD player with a CD containing the source noise, a power amplifier and four large loudspeakers. The arrangement is powered by a petrol generator, all located in an open area. The sound was recorded by acoustic consultants at distant off-site locations, as well as at near-field positions close to the speakers. Three personnel conducted measurements simultaneously, each with a Type 1 narrow-band analyser. The operators collected random samples of at least 5-minute duration at various locations and times through each monitoring period. Meteorological data were collected continuously by three nearby weather stations. Each narrow-band sample was then analysed to separate the discrete pure tones from the recorded ambient noise. In the first instance the fluctuation of the absolute source contribution at each monitoring site is quantified. The meteorological and noise data are correlated and analysed to quantify the effects of weather on noise propagation. These measurements are compared to predictions from a detailed three-dimensional model. The comparison shows interesting divergence of results but with encouraging correlation in noise levels on average.
(1) Dept. of Environmental and Hydraulic Engineering, University of Pavia, Pavia, Italy (2) Dept. of Mechanical Engineering, University of Salerno, Fisciano, Italy
ABSTRACT
In urban health evaluation, environmental comfort (thermo-hygrometric, acoustical and lighting comfort) represents a fundamental aspect for quantifying the influence of climate and human activities on human health. The correlations among outdoor comfort, urban landscape and architectural features offer wide perspectives, as they could represent very useful means of giving a qualitative and quantitative judgment on the existing building stock or of choosing the main actions for the restoration of the urban environment. To obtain a global urban health judgment, all the contributing factors must be correctly weighted and combined. An analysis based on homogeneous quantities is needed, avoiding a multiplication of judgment scales and using, for example, one scale for each parameter that is comparable with the others. To quantify and correlate the various environmental elements, a first attempt was made at a global assessment on the basis of a mutual judgment system. Therefore, a schematisation based on indicators and indexes has been developed. With these definitions, relating to the evaluation of outdoor acoustical comfort, it is possible to obtain a global environmental quality evaluation that also considers other aspects such as thermo-hygrometric and atmospheric pollution effects. First analyses to validate the method on the basis of experimental sound pressure level data have been carried out. Moreover, experimental data on noise levels, traffic and population density were considered as basic parameters from which single indicators and a global outdoor environmental index can be developed. In the meantime a subjective investigation was performed to correlate the values of the acoustical index with the individual sensations of the people living in the surroundings.
Brisbane City Council, Queensland, Australia
ABSTRACT
In order to develop socially relevant noise policy it is necessary to understand the health effects of noise exposure and the corresponding cost of these effects. These social costs can then be weighed against the economic costs of implementing noise mitigation measures. The study presented in this discussion illustrates a method of cost benefit analysis for noise mitigation. The response relationship between transport noise (road, rail and aircraft) and health effects (annoyance and sleep disturbance) is well documented. These health effects place a burden on society.
Recent health studies have enabled quantitative analysis of the burden of disease on populations. The World Health Organization's 'Disability Adjusted Life Year' metric can be used for this purpose. In the case of transport noise, it is necessary to quantify the 'disability' caused by noise related annoyance and sleep disturbance. Preliminary studies have made attempts at using this approach. A basic economic analysis is considered in this study by attributing a 'human capital' value to Disability Adjusted Life Years. The benefits in terms of human capital can then be compared to the economic costs of providing noise mitigation measures. The techniques discussed above allow policy makers to attempt to determine a 'triple bottom line' where environmental, social and economic outcomes are taken into account. This study has considered: (1) The annoyance and sleep disturbance effects of road traffic noise; (2) The health effects related to road traffic noise in terms of Disability Adjusted Life Years; (3) The costs associated with including noise mitigating construction materials in new buildings; and (4) The economic benefits associated with improved levels of health and improvements to property values. A case study of how this has been used in Brisbane for planning residential development near transport corridors will be presented.
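A minimal numeric sketch of the cost-benefit logic described above is given below; the disability weights, exposure fractions, population size, mitigation cost and dollar value per DALY are placeholders for illustration only, not figures from this study:

```python
# Illustrative sketch only: every numeric value below is an assumption.
def dalys(population, fraction_affected, disability_weight):
    """Disability-adjusted life years accrued per year by an exposed population."""
    return population * fraction_affected * disability_weight

exposed = 10_000                              # residents above the noise criterion (assumed)
daly_annoyance = dalys(exposed, 0.12, 0.02)   # 12 % highly annoyed, weight 0.02 (assumed)
daly_sleep = dalys(exposed, 0.08, 0.07)       # 8 % highly sleep disturbed, weight 0.07 (assumed)

value_per_daly = 150_000                      # human-capital value per DALY, AUD (assumed)
annual_benefit = (daly_annoyance + daly_sleep) * value_per_daly
mitigation_cost = 20_000_000                  # up-front cost of quieter construction (assumed)

years_to_break_even = mitigation_cost / annual_benefit
print(f"{daly_annoyance + daly_sleep:.0f} DALY/yr avoided, break-even in "
      f"{years_to_break_even:.1f} years (before property-value benefits)")
```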
(1) Universidad da Amazônia - UNAMA, Brazil (2) Instituto de Acústica de Madrid - CSIC, Spain
ABSTRACT
At present, noise maps are the most powerful tool available to assess the acoustical environmental state in inhabited areas. Although they are usually based on calculation, their complexity makes it necessary to use a set of measurements to "calibrate" the calculation. As a general rule, the more points measured the more accurate the model will be. When applying this technique a regular grid of measuring points makes little sense. Only a small set of points will be used and they have to be chosen according to how representative they are within the set. Currently, methods for selecting the measuring points are based on the type or category of the street. From a statistical point of view, these methods can be interpreted as a kind of stratified sampling of the acoustical population where the strata are defined in terms of land use. In this way, one can focus on relevant town areas, which will have a significant effect on the efficiency and accuracy of the estimation. However, the street categories will not always be directly related to the noise levels, as the strata depend not only on acoustical parameters but also on other qualitative parameters (for example, street activity). Assuming that each noise level can be assigned to a category and that the set of all noise levels can be divided into categories, the noise level distribution can be defined as a mixture of distributions from the different categories. In this paper, a set of "describing" distributions within the noise levels of a town is identified (on a district-by-district basis). Patterns in the noise level distribution based on these distributions are identified. These patterns could be useful in identifying the street categories within a town, so that a more accurate stratified sampling could be implemented and, in addition, the selection of relevant stratification variables would be improved.
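To illustrate the mixture-of-distributions idea, the following sketch (assuming per-street Leq values are available; the data and the choice of three components are invented, and this is not the authors' fitting procedure) fits a Gaussian mixture and assigns each street to a stratum:

```python
# A minimal sketch of noise-level stratification via a mixture model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic "district" of street noise levels drawn from three hidden categories
leq = np.concatenate([rng.normal(58, 2, 80),     # quiet residential streets
                      rng.normal(66, 2, 60),     # collector streets
                      rng.normal(74, 2, 40)])    # arterial streets

gmm = GaussianMixture(n_components=3, random_state=0).fit(leq.reshape(-1, 1))
labels = gmm.predict(leq.reshape(-1, 1))         # stratum assigned to each street

for k in range(3):
    mean_k, weight_k = gmm.means_[k, 0], gmm.weights_[k]
    print(f"stratum {k}: mean {mean_k:.1f} dB(A), share {weight_k:.2f}, "
          f"{np.sum(labels == k)} streets")
# Measurement points for map calibration could then be drawn from each stratum
# in proportion to its weight, rather than from nominal street categories.
```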
1. Civil Engineering Graduate Program - Rio de Janeiro Federal University - PEC/COPPE/UFRJ, Rio de Janeiro, Brazil 2. Federal Center of Technological Education of Rio de Janeiro - CEFET-RJ
ABSTRACT
This work is concerned with the development and application of a methodology that includes the affected communities' noise perception as a parameter in airport noise environmental impact studies. In Brazil, until now, airport noise environmental impact studies have been based primarily on Noise Zoning Plans and simulated noise contours from the Integrated Noise Model software, as well as on noise measurements at selected external points aiming to characterize the aircraft noise contribution relative to background noise; the guidelines of Norm ABNT 10151 must be followed. To date, the airport noise perception of affected residents has not been included as a parameter for environmental impact assessment. Since March 2009 the neighbouring communities annoyed by landing and takeoff noise from Santos Dumont Airport have been actively petitioning the state environmental control institutions to solve a problem that was becoming worse due to the expansion of airport operations. At the same time the first author began a noise annoyance social survey in the Santos Dumont Airport neighbourhoods. In the first stage of the work, interviews were conducted at about 70 different addresses distributed over five distinct districts as a purposeful sample of residents contacted through residents' associations. The interviews were conducted by undergraduate students trained by the first author through a 40-hour course developing field research skills. The carefully elaborated questionnaire applied during the interviews and the data collection methods are described in this paper. In the second stage of this work, noise measurements at selected points will be carried out according to the tabulation and analysis of the social survey data, aiming to configure a complete socio-acoustic survey in the near future.
DataKustik GmbH, Greifenberg, Germany
ABSTRACT
Noise prediction methods must include the mathematical description of many physical phenomena influencing sound propagation. More scientifically based methods approximate the solution of the wave equation with given boundary conditions, while engineering methods simulate the wave propagation by geometrically defined rays. The scientifically based methods are powerful for investigating certain effects, such as propagation in a layered atmosphere or diffraction over a complex barrier, in a simple and clearly defined environment, while the engineering methods are clearly superior in realistic complex scenarios such as industrial facilities or built-up areas in cities with thousands of traffic sources. The techniques applied have been improved in recent years and the most important of these improvements are presented and explained.
Queensland Department of Transport and Main Roads, Queensland, Australia
ABSTRACT
This paper presents an overview of the Queensland Department of Transport and Main Roads (TMR) draft Construction Management Code of Practice: Part 1 Noise 2010. The Guideline sets noise and vibration limits for the contractor and provides guidance on source, pathway and receptor noise control options. One of the biggest challenges facing urban roadway and tunnelling construction projects at present is the need to mitigate environmental noise and vibration impacts. The general approach adopted by this guideline is one of minimising overall disruption from road and tunnel construction operations. Disruption refers to effects on people, their activities, property and environment associated with road and tunnel construction activity, and can occur as a result of works within the road reserve, materials processing at temporary fixed facilities, truck movement on off-site haul routes and effects on general traffic and utilities within the wider area. A two-level hierarchy of controls is adopted: standard controls and project-specific controls. The guideline's intent is to address noise and vibration pro-actively whenever possible, anticipating and avoiding the creation of unduly noisy and undesirable vibration conditions, but also to allow proper mechanisms to control noisy conditions without sustaining costly claims from contractors.
Department of Transport and Main Roads, Qld, Australia
ABSTRACT
Low frequency noise (LFN) is common as background noise in urban environments and as an emission from many artificial sources: road vehicles, aircraft, industrial machinery, artillery and mining explosions, and air movement machinery including wind turbines, compressors, and indoor ventilation and air conditioning units. LFN may also produce vibrations and rattles as secondary effects. The effects of LFN are of particular concern because of its pervasiveness due to numerous sources, efficient propagation and the reduced efficacy of many structures (dwellings, walls, and hearing protection) in attenuating LFN compared with other noise. Current transportation noise impact assessments are usually based on broadband A-weighted noise indicators. Over the past 50 years, the A-weighted sound pressure level (dB(A)) has become the major descriptor used in noise assessment. This is despite the fact that many studies have shown that the use of the A-weighting curve underestimates the role that LFN plays in loudness perception, annoyance and speech intelligibility. The de-emphasis of LFN content by A-weighting can also lead to an underestimation of the exposure risk of some physical and psychological effects that have been associated with low frequency noise. As a result of this reliance on dB(A) measurements, there is a lack of importance placed on minimising LFN impacts. A more complete picture and better correlation with annoyance and health effects may result from indicators that include temporal aspects and frequency character. This paper presents an overview of some examples of low frequency indicators applied to transportation sources.
Department of Building Services Engineering, The Hong Kong Polytechnic University, Hong Kong, P.R.China
ABSTRACT
Natural ventilation and acoustic protection are two conflicting issues. In densely populated cities, keeping windows open is often not feasible because an open window provides a path for external noise to break into the building. A special window system, namely the plenum window, was investigated in this study for its acoustical insertion loss when the window was not parallel to the road in the urban environment, whilst still allowing a certain degree of natural ventilation. The acoustic performance of the plenum window was studied using a 1:4 scale model and a 5 m long source array consisting of 25 six-inch aperture loudspeakers. The position of the sound source relative to the window was found to be significant for protection against transportation noise, based on the trend of the insertion loss spectra. The insertion loss of the device was defined as the difference in the average noise level inside the receiver room between the opened window and the plenum window. There was around 7 dBA variation in the insertion loss over the range of source angles studied. The highest insertion loss was about 11 dBA and was obtained when the window was nearly parallel to the simulated line source. The plenum window is believed to be a good acoustic window with high practicality in densely populated urban environments.
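The insertion-loss definition quoted above amounts to a difference of energy-averaged indoor levels; a minimal sketch with made-up sample levels:

```python
# Sketch of the insertion-loss definition used above, with hypothetical level samples.
import numpy as np

def energy_average(levels_db):
    """Energy (logarithmic) average of a set of A-weighted levels."""
    return 10 * np.log10(np.mean(10 ** (np.asarray(levels_db) / 10)))

indoor_open_window   = [72.1, 71.5, 72.8, 71.9]   # dB(A), invented samples
indoor_plenum_window = [63.0, 62.2, 63.5, 62.8]

il = energy_average(indoor_open_window) - energy_average(indoor_plenum_window)
print(f"insertion loss = {il:.1f} dBA")
```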
Tasmanian Department of Primary Industries, Parks, Water and Environment, Australia
ABSTRACT
Environmental noise, the name given to sound produced by the activities of humans, is regulated by legislation that aims to achieve an acceptable balance between activities that emit sound energy and activities or situations that exhibit sensitivity to sound. Much of this legislation, or its legal instruments, draws on quantification provided by A-weighted sound pressure levels. There are many instances where the A-weighted level does not provide a particularly realistic measure of impact and there have been some adjustment schemes established to account for attention-attracting characteristics such as tonal features and various forms of modulation.
This arrangement has been used over many decades and currently forms the basis of much environmental noise impact assessment. The ultimate intent, to minimise either sound pressure levels or the annoyance that they cause, is still somewhat unclear. There are situations where the apparent level of impact appears to be inconsistent with the appropriately adjusted A-weighted sound pressure level, and there have been some strong criticisms directed at the approach. Some of the advantages and disadvantages of this approach are discussed, particularly in relation to sound emitted from industrial activities. The assessment of highly complex sounds and sound propagation regimes, in relation to a range of noise sources and receiving environments, suggests the need to draw on a wider range of analysis methods. This has ramifications for the formulation of legislation intended to control the sources of environmental noise.
(1) School of Architecture, Tsinghua University, Beijing, P.R.China (2) Beijing Zhongya Kangyuan Environment Protection Co. Ltd., Beijing, P.R.China
ABSTRACT
Underground storage stations are now routine facilities in pipeline natural gas transportation in China, used for pressure balancing; they employ compressors that generate tremendous noise, often disturbing the neighbourhood. This paper introduces a noise control case at the largest natural gas storage station in eastern China. There are three 3560 kW gas-driven compressors, manufactured by Caterpillar, with 910 km3/s ventilation air coolers. The legal noise limit at 1 m from the station boundary (the nearest distance from the compressors and coolers being 14 m) was 45 dB(A), while the noise level of each compressor at 1 m is 105 dB(A) and that of each air cooler is 85 dB(A). To reduce the noise, the compressors were enclosed in a sound reduction workshop with both lightweight constructions and sufficient forced ventilation, the coolers were surrounded by parallel absorptive panels and sound barriers, and new quiet mufflers were fitted to the engine tail pipes. The final noise level was reduced to 43 dB(A) without any influence on the normal running of the station. This paper discusses in more detail the balance between noise reduction, the lightweight sound isolation constructions (for explosion discharge) and favourable ventilation, which were designed rationally and examined through scientific simulation experiments.
Osaka Institute of Technology, Japan
ABSTRACT
Setting and maintaining an environmental noise evaluation index is essential for realizing a comfortable environment. The index should be easy to handle and should express human sensation well. At present, LAeq is employed as the standard index for environmental noise evaluation. It is a useful index because the value is calculated by averaging the energy of the sound pressure level and can be obtained from an integrating-averaging sound level meter. However, this index is sometimes reported not to express human sensation very well. In this study, we considered a new environmental evaluation index which expresses sensation better than LAeq and can be calculated as easily as LAeq.
In the experiment, we measured 7 kinds of traffic noise using a standard microphone and calculated LAeq. Next, we reproduced the measured traffic noise for subjects in a subjective loudness evaluation test. As a result, the correlation coefficient between LAeq and subjective loudness was not high enough (0.65). We considered that the signal perceived by a subject is affected by the head shape (head-related transfer function), so the signal measured by the standard microphone is not exactly the same as the signal perceived by the subject. We therefore used headset-type microphones instead of the standard microphone to obtain signals affected by the head shape. We also considered that, when a subject evaluates the loudness of an environmental noise, the feeling of annoyance affects the loudness sensation. We therefore employed the 40-noy equal-noisiness frequency weighting filter (D characteristics), which expresses the degree of annoyance a person feels towards a noise as a function of frequency band, instead of the A-weighting filter. We calculated a new index (LhDeq) using the headset-type microphones and the D-characteristics filters. The calculation method was the same as that of LAeq, except that the headset-type microphones replaced the standard microphone and the D-weighting replaced the A-weighting filter. As a result, the correlation coefficient between loudness and LhDeq was 0.92, much higher than that for LAeq (0.65).
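The index calculation and its comparison with subjective loudness can be sketched as follows; the sample data are synthetic, and the D-weighted headset levels are simply assumed to be available as short-term level series (this is a sketch of the general Leq-style calculation, not the authors' processing chain):

```python
# Sketch: Leq-style index from short-term levels, and its correlation with loudness scores.
import numpy as np

rng = np.random.default_rng(1)

def leq(levels_db):
    """Equivalent continuous level from a series of short-term levels."""
    return 10 * np.log10(np.mean(10 ** (np.asarray(levels_db) / 10)))

# seven traffic-noise samples, each a series of 1-s levels (all synthetic)
a_weighted = [rng.normal(65 + 2 * k, 3, 60) for k in range(7)]       # standard mic, A-weighting
d_weighted = [s + 4 + rng.normal(0, 0.5, 60) for s in a_weighted]    # headset mics, D-weighting (assumed offset)

laeq  = np.array([leq(s) for s in a_weighted])
lhdeq = np.array([leq(s) for s in d_weighted])
loudness = 0.9 * lhdeq + rng.normal(0, 1, 7)                         # mock subjective scores

print("r(LAeq,  loudness) =", round(np.corrcoef(laeq,  loudness)[0, 1], 2))
print("r(LhDeq, loudness) =", round(np.corrcoef(lhdeq, loudness)[0, 1], 2))
```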
Renzo Tonin & Associates, Surry Hills, NSW, Australia
ABSTRACT
Road traffic noise impacting the occupants of residential dwellings has become a major concern for the community in recent years. Given that in many existing situations there are physical constraints which prevent noise mitigation measures from being applied at-road or along property boundaries, reliance is placed upon reducing noise at the building envelope to achieve acceptable internal levels for building occupants. Therefore, a better understanding of how and to what extent a building is able to reduce traffic noise intrusion is imperative if one is to successfully design suitable noise controls for the benefit of building occupants, in particular in sleeping and living areas of dwellings. Determining the level of reduction in traffic noise achieved between external and internal areas of a building, following the installation of a range of acoustic treatments applied to various residential buildings, can assist designers in selecting suitable types of acoustic treatment for a range of building constructions impacted upon by noise from roads with varying traffic carrying capacities.
Noise surveys were undertaken at various residential sites in NSW, with measurements conducted concurrently both externally and internally for buildings of various constructions (e.g. light-framed, brick veneer, double brick, etc). The results were used to determine the degree of noise reduction achieved by the building envelope before and after acoustic treatments were implemented. The results of the noise measurements are presented herein to provide an understanding of the level of noise reduction achievable with certain types of acoustic treatment for different types of building construction impacted upon by noise from a range of roads with varying traffic carrying capacities.
Parsons Brinckerhoff, Australia
ABSTRACT
There is a limit to the number of vehicles that can travel on a road section per hour, and vehicle speed will decrease as the road becomes congested. In Queensland, road traffic noise emissions are calculated using the Calculation of Road Traffic Noise method (CoRTN). This method depends on traffic volume and speed, among other parameters; however, in the current calculation method, speed is assumed to be constant. By incorporating the inherent traffic speed constraints, a more accurate method of calculating the L10,1hr is obtained.
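A minimal sketch of the idea follows. The basic hourly level and the speed/heavy-vehicle correction are the commonly quoted CoRTN expressions, while the linear speed-flow relation and its parameters are assumptions made here for illustration, not the paper's calibrated model:

```python
# Sketch of constant-speed vs speed-constrained CoRTN hourly L10 (at the 10 m reference).
import math

def cortn_l10_1h(q, v, p_heavy):
    """Approximate CoRTN hourly L10 at 10 m for flow q (veh/h), speed v (km/h),
    percentage heavy vehicles p_heavy (commonly quoted CoRTN expressions)."""
    basic = 42.2 + 10 * math.log10(q)
    speed_hv = 33 * math.log10(v + 40 + 500 / v) + 10 * math.log10(1 + 5 * p_heavy / v) - 68.8
    return basic + speed_hv

def congested_speed(q, free_speed=80.0, capacity=2000.0, min_speed=20.0):
    """Hypothetical linear speed-flow constraint: speed falls as flow approaches capacity."""
    frac = min(q / capacity, 1.0)
    return max(free_speed * (1 - 0.6 * frac), min_speed)

for q in (500, 1000, 1500, 2000):
    fixed = cortn_l10_1h(q, 80.0, 10)                     # constant-speed assumption
    constrained = cortn_l10_1h(q, congested_speed(q), 10) # speed falls with congestion
    print(f"q={q:4d} veh/h: constant speed {fixed:.1f} dB(A), "
          f"speed-constrained {constrained:.1f} dB(A)")
```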
Ray W. Herrick Laboratories, Purdue University, Indiana, USA
ABSTRACT
It is well known that acoustical modes exist in tire cavities. Previous research on tire cavity modes has focused on the transmission of structure-borne noise to the vehicle interior due to the force that the tire cavity mode exerts on the wheel hub. In contrast, here the major concern is the identification of the tire surface vibration and the sound radiation from the tire surface that can be attributed to the tire cavity mode. The surface normal vibration of a point-driven tire has been measured over a complete circumference by using a scanning laser Doppler vibrometer. When the space-frequency data is transformed to the wavenumber-frequency domain, a clear feature that can be attributed to the tire cavity mode becomes visible. Although the magnitude of the surface vibration resulting from the tire cavity mode is small, its radiation efficiency is high owing to the high phase speed of the acoustical waves that create the tire cavity mode. It has also been found that, as expected, tire vibration features associated with the tire cavity mode disappear when the tire is filled with fibrous, sound absorbing material. Finally, measurements of sound radiation from a tire driven by a steady-state point input, and from a tire driven by a uniform impact over the contact patch area, are presented, and the features associated with the tire cavity mode are highlighted.
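The space-to-wavenumber transform mentioned above can be sketched as a spatial Fourier transform of vibration measured at equally spaced circumferential points; the synthetic data below (an order-12 structural wave plus a weak low-order feature standing in for a cavity-mode-like contribution) are purely illustrative and are not the measured tire data:

```python
# Sketch: transform space-frequency vibration data to circumferential order vs frequency.
import numpy as np

n_points, n_freqs = 128, 400             # measurement points around circumference, FFT bins
theta = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
freqs = np.linspace(0, 800, n_freqs)     # Hz

# synthetic space-frequency data: a structural wave (order 12 near 180 Hz)
# plus a weak low-order feature (order 1 near 230 Hz)
v = (np.outer(np.cos(12 * theta), np.exp(-((freqs - 180) / 15) ** 2))
     + 0.2 * np.outer(np.cos(1 * theta), np.exp(-((freqs - 230) / 5) ** 2)))

V = np.fft.fft(v, axis=0) / n_points     # spatial DFT over the circumference
orders = np.fft.fftfreq(n_points, d=1.0 / n_points)   # circumferential order of each row of V

# the low-order feature stands out at order n = 1 even though its amplitude is small
peak_bin = np.argmax(np.abs(V[1, :]))
print(f"order-1 content (see orders[1] = {orders[1]:.0f}) peaks near {freqs[peak_bin]:.0f} Hz")
```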
Renzo Tonin & Associates (NSW) Pty Ltd, NSW, Australia
ABSTRACT
This paper provides an update of information presented in a paper written for the AAS Acoustics 2008 conference in Geelong, Victoria. In particular this paper presents results of traffic noise modelling using CadnaA and SoundPLAN and compares both to noise measurements for three large recent road projects in NSW. CadnaA is a well known and internationally accepted noise modelling package, and its acceptance and use in Australia amongst acoustic professionals is growing fast. To assist the Australian acoustical profession, the appropriateness and accuracy of CadnaA under Australian conditions is currently being verified, and this paper presents actual project results for this purpose.
Unlike CadnaA, the SoundPLAN noise prediction model is extensively used in Australia, particularly for road traffic noise predictions, and has been recognised and accepted nationally by various regulatory authorities including the major road authorities and environmental agencies. The aim of this paper is to provide additional comparative data for predicted traffic noise levels using the Calculation of Road Traffic Noise (CoRTN) algorithms as implemented by SoundPLAN and the CadnaA noise models for three large recent road projects in NSW. These three projects offer features and characteristics that differ significantly from the projects reported in the 2008 paper. Results from this study re-confirm that the CadnaA noise modeling package is accurate and effective for modelling road traffic noise in Australia.
(1) Universidade Estadual de Maringá, Brazil (2) Universidade Estadual de Campinas, Brazil
ABSTRACT
The city of Maringá, Paraná State, Brazil, has recorded a large increase in its vehicle fleet in recent years. As a consequence, among other problems, a loss of traffic flow in certain regions and an increase in the noise generated by vehicular traffic were observed. To mitigate these problems the city government implemented a binary (one-way pair) traffic system in these regions, which changed the urban acoustic scenery. On the campus of the State University of Maringá there is a language school located about thirty metres from a street whose direction of traffic was changed to suit the binary traffic system. This research initially consisted of monitoring and mapping the noise in the surroundings of this language school before and after the change in the direction of traffic, in order to determine the changes in the acoustic setting of the site. In addition, it was determined whether these changes were beneficial to the acoustic comfort of the people who live in or use this space. From this analysis it was found that the changes in traffic led to an increase in the noise level measured at the site, which does not meet the criteria established by the city. A study was then conducted, using simulations of noise barriers, to adjust the sound level reaching the language school. Finally, an acoustic barrier was proposed to be built on the site in order to provide users with a noise level appropriate to its activities.
AECOM, Adelaide, SA, Australia
ABSTRACT
When predicting noise emissions from a road using the CoRTN model, including as implemented in SoundPlan software, unexpectedly high noise results can occur for a receiver located on the outside of a curved section of road. This can affect traffic noise barrier designs, and may result in unnecessarily high traffic noise barriers as a potentially unintended consequence of the CoRTN model. Reducing the search radius from the default distance in the SoundPlan calculation module can result in a significant decrease in the noise level predicted for these receivers. This paper presents a brief overview of the implementation of the CoRTN model and the results of measurements undertaken on vehicles travelling at 100 km/h. It seeks to determine the difference in sound power level between cars travelling head-on versus side-on relative to a receiver. Furthermore, the results were used to determine an appropriate search radius to use when implementing the CoRTN model in SoundPlan software.
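A rough illustration of why the search radius matters is sketched below; each road segment's contribution uses the commonly quoted CoRTN angle-of-view correction 10 log10(theta/180) together with an approximate distance term, but the geometry, reference level and segment list are invented, and this is not the SoundPlan implementation:

```python
# Sketch: energy-summing road segments within a search radius (illustrative values only).
import math

def segment_level(ref_level_at_10m, distance_m, angle_deg):
    distance_corr = -10 * math.log10(distance_m / 13.5)   # approximate CoRTN distance term
    view_corr = 10 * math.log10(angle_deg / 180)          # CoRTN angle-of-view correction
    return ref_level_at_10m + distance_corr + view_corr

def total_level(segments, search_radius_m, ref_level=75.0):
    """Energy sum of segments whose distance lies within the search radius."""
    kept = [(d, a) for d, a in segments if d <= search_radius_m]
    return 10 * math.log10(sum(10 ** (segment_level(ref_level, d, a) / 10) for d, a in kept))

# (distance m, subtended angle deg): a nearby straight section plus distant segments
# of a curve wrapping around the receiver
segments = [(30, 60), (60, 40), (150, 30), (400, 25), (900, 20)]
for r in (2000, 1000, 500, 200):
    print(f"search radius {r:4d} m: L10 = {total_level(segments, r):.1f} dB(A)")
```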
Road Planning and Design Branch, Engineering and Technology Division, Department of Transport and Main Roads, Queensland, Australia
ABSTRACT
Feasibility and reasonableness of noise barriers are terms commonly found in the road traffic noise management protocols of various road authorities. They arise in road traffic noise management in recognition that it is not always possible to build a noise barrier that attenuates road traffic noise to within project criteria at all noise sensitive receivers. Feasibility is related to engineering considerations such as safety (for example, of road users, pedestrians and cyclists), maintenance, space limitations, drainage, road access locations, locations of services and structures and, most importantly, topography. Reasonableness reviews the practicality of a noise barrier under site-specific circumstances and includes data from acoustic assessments, cost considerations, community consultation and the aesthetics of the streetscape. This paper does not consider reasonableness. The feasibility test must be passed prior to consideration of reasonableness, and this paper presents a geometric method which can be used during the acoustic assessment and road design process to assist in determining feasible locations for noise barriers. The use of such a method during the road design process will improve road geometries to assist in road traffic noise management. This paper reviews (a) the acoustic fundamentals of noise barrier design, (b) some structural engineering aspects of noise barrier design, (c) combined effects on noise barrier location from acoustics, structural engineering and road design, and (d) the proposed geometric method of determining a noise barrier feasibility rating, followed by some examples.
Road Planning and Design Branch, Engineering and Technology Division, Department of Transport and Main Roads, Queensland, Australia
ABSTRACT
A sound power level survey was conducted of vehicles on Queensland roads to produce a database of vehicle sound power levels categorised by vehicle classification, speed, pavement surface type and driving conditions. The purpose of the study was to compare the local vehicle sound power levels with similar surveys conducted in Europe in application of the Nordic and Harmonoise prediction methods. This paper presents the methodology employed in the study and the locations measured, and also outlines the results and analysis.
IOA, Institute of Acoustics, UK
ABSTRACT
This paper is an update on recently proposed enhancements to the noise barrier design specification standards for road highways in the European Union. With the growing importance of value management, and ongoing barrier maintenance becoming an increasingly costly exercise, the use of durable low-maintenance noise barrier systems is becoming essential. These proposed changes would be made to ensure that the reduction in noise emissions from highways can be sustained for the life of a barrier through the specification of effective and durable noise barrier designs. Changes include: 1) defining higher categories for the specification of acoustic performance for tall barriers, both in terms of sound absorption and airborne sound insulation; 2) requiring outdoor noise testing of all barriers under direct sound field conditions instead of the classical indoor laboratory test regime; 3) the potential use of in situ acoustic testing of barrier durability as a tool for barrier maintenance and asset management.
(1) NSW Department of Planning, Sydney, NSW, Australia. (2) JW Acoustic & Air Consultancy, Sydney, NSW, Australia (3) TEF and Visiting Fellow, University of New South Wales, Sydney, NSW, Australia
ABSTRACT
A number of 'rules of thumb' exist which allow quick and simple comparison between different noise indices associated with road traffic noise, for example L10(18h) = Leq(24h) + 3.5 dB (Brown, 1989). Most of these rules of thumb were established many years ago and it is an objective of the present paper to assess whether they are still valid in 2010. In addition, an extensive data set has been interrogated to investigate the morning shoulder period between 6 am and 7 am, when there is a significant increase in road traffic noise on many urban roads. The implications of including the morning period as part of an Leq(9h) night or an Leq(16h) day are discussed.
Kumamoto University, Japan
ABSTRACT
Noise pollution due to road traffic is a major global concern because of its negative impact on the quality of life in communities. Vietnam is a developing country in Southeast Asia, and its environment has been seriously affected by industrialization and urbanization. In large cities like Hanoi and Ho Chi Minh City, noise emission from road traffic has been found to be a serious concern among the general public. Nevertheless, Vietnam has not yet developed a practical noise policy and countermeasures to cope with the situation. Two large-scale socio-acoustic surveys of community response to road traffic noise were conducted in Hanoi and Ho Chi Minh City in 2005 and 2007, respectively, in order to investigate people's reactions to road traffic noise. One of the main objectives of this study was to accumulate noise and social survey data for Vietnam and to investigate the dose-effect relationship for community noise annoyance. This study also enriches the global discussion on noise and its effects on humans.
Indian Institute of Technology, Madras, India
ABSTRACT
Traffic noise characteristics in the cities of a developing country like India vary by virtue of the fact that the traffic composition is heterogeneous, with variation in road geometric features, surface characteristics, honking behaviour and the density of buildings on either side of the road. To study the propagation and spread of traffic noise in some of these areas, a noise mapping study has been attempted along with field measurements of L10, L50, L90 and Leq. In the noise mapping, parameters such as Ld, Ln and Lden have been derived by taking into consideration the geometric features of the roads and the varying heights of the buildings. In this study, noise mapping through a computer simulation model (SoundPLAN software) is used, considering several noise sources and the propagation of noise to the receiver point. Some of the prediction models, such as the UK's CoRTN, the US's TNM and their modified versions, have limited applicability to such heterogeneity. Therefore a separate multiple regression model is discussed to suit the heterogeneous traffic conditions for noise mapping purposes.
(1) Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden (2) Belgian Road Research Centre (BRRC), Brussels, Belgium (3) Technical University of Gdansk (TUG), Gdansk, Poland
ABSTRACT
It has been suggested recently that vehicles driven in electric mode, whether hybrid or pure electric vehicles, are so quiet that they constitute a safety hazard for pedestrians and bicyclists in traffic. It is claimed that such vehicles are not acoustically perceived because the power unit has been exchanged from a combustion engine to electric motors, which essentially removes all power unit noise and leaves tyre/road noise, the latter of which is the same as for similar-sized vehicles with combustion engines. There are currently a number of fast and concerted actions by the US and Japanese governments as well as within international bodies such as UN/ECE and ISO, with the expected outcome that "minimum noise" of vehicles shall be measured with a standard method and legal limit values for such "minimum noise" shall be established. The paper presents findings regarding possible traffic safety effects of quiet vehicles and concludes that only a US study has identified such effects. A critical review leads to the conclusion that this study may be biased and needs confirmation by further research. After reviewing data from noise measurements in Japan, the authors present their own previously unpublished data on noise emission levels for road vehicles which may be considered "quiet". Special attention is given to noise at speeds below 20 km/h, where it is expected that the problem might be worst and where previous data are missing. It is concluded that a significant number of our present internal combustion engine vehicles are already so quiet at low speeds that normally one cannot hear any difference between an electric and a conventional vehicle in an urban area. Tyre/road noise is the dominant noise in most cases where a light vehicle is driven at speeds at or above 15-20 km/h (heavy accelerations are the exceptions), and this is the same whether the vehicle is electric or not. Thus, it is a property of the vehicle fleet we have had for more than a decade, and few have considered it a safety problem. Therefore, there is not enough justification for equipping our future quiet vehicles with extra artificial noise or warning sounds. If anything is needed at all, there are better options which are non-acoustical.
(1) DIENCA-CIARM, University of Bologna, Italy (2) IED, University of Parma, Italy
ABSTRACT
Barriers employed for road traffic noise reduction can be characterized by two indices: the reflection index for sound reflection and the insulation index for airborne sound insulation. They can be measured following the method described in the CEN/TS 1793-5 standard, based on impulse response measurements employing a pressure microphone. The method mandates averaging the results of measurements taken at different points in front of the device under test and/or for specific angles of incidence, employing the obsolete MLS signal to perform the measurements.
However, the CEN/TS 1793-5 standard presents some geometric problems, which can arise if the barrier does not reach a minimum height or if it has a very rough (scattering) surface. During the reflection index measurement on a barrier of limited height, the sound signal can hit the ground or go over the edge of the barrier, compromising the reliability of the whole result. The insulation index can also be affected by the height of the noise barrier, since sound can pass above the device under test if it does not meet minimum geometrical bounds. It has been noted that these practical problems, together with the assumption in the final formula of a specularly reflecting surface, can lead to significant over- or under-estimation of the laboratory values of both indices. Results of in situ tests based on CEN/TS 1793-5 are shown in comparison with results obtained through a different approach, based on sound intensity measurements, and with the traditional tests performed in the laboratory.
VANKEULEN advies bv, The Netherlands
ABSTRACT
Low-noise pavements have proven to be very effective and efficient reducers of traffic noise. This reduction is normally expressed in (spectral) noise level differences with unit dB(A). However, the noise reductions of low-noise pavements can differ significantly from the net noise reductions measured at the adjacent facades. On the other hand, recent research has shown that the subjective impressions of roads with low-noise pavements often seem to contradict the objectively measured noise level reductions. A criterion derived from psychophysics has been developed to determine the effectiveness of low-noise pavements. In two cities in the Netherlands, (test) tracks with low-noise pavements have been constructed. First, standard SPB and CPX measurements were carried out, along with measurements outside and inside the adjacent dwellings. Second, psychophysical analyses based on the normalised CPX results were carried out. Third, all inhabitants of the streets concerned filled in a questionnaire concerning their impression of the improvement of their situation due to the low-noise pavements. The results of the psychophysical analyses and the questionnaire seem to corroborate each other well; however, in one case the questionnaire led to biased results due to changed safety conditions and acoustical conditions caused by sources other than road traffic.
Lochard Ltd, Caulfield North, Victoria, Australia
ABSTRACT
Aircraft noise monitoring involves separately distinguishing and characterising the sound produced by aircraft from the residual (background) sound. The standard process involves the application of a threshold to divide the two classes of sound. Many authors have pointed out the various possible errors in this process and have endeavoured to find estimates for the errors. It has also been pointed out that an important element in the process is to recognise that there is always a third class - the "uncertainty class" - for which it is not possible to ascribe the sound either to aircraft or to background. Such sound must be accepted as unknown and unknowable. In this paper we investigate some of the methods that can be applied to improve the accuracy of characterisation. These include the application of neural networks for recognition of individual one- or half-second samples, dual and fuzzy thresholds in relation to the uncertainty class, spectrally derived information and dynamic loudness to distinguish aircraft from other sound. Comparisons with results based on recordings from installed noise monitoring systems under normal operating conditions will be presented.
Institute of Sound and Vibration Research, University of Southampton, Southampton, UK
ABSTRACT
Acoustic liners placed in the intake and bypass ducts of turbofan aeroengines have played a significant role in mitigating fan noise over many decades. In the case of large commercial aircraft powered by high bypass ratio turbofan engines, the fan stage is the principal source of turbomachinery noise, and a principal contributor to whole aircraft noise. Typically a turbofan liner is formed from single or double layers of honeycomb material which are separated from the flow and from each other by porous sheets, and fixed to a rigid backing sheet. The performance of installed liners is assessed by the extent to which they reduce whole aircraft noise, as measured by Effective Perceived Noise Level (EPNL) at three noise certification points. The selection of physical liner parameters (depth, hole size, open area etc) to reduce EPNL is a complex task which requires the following components: 1) An impedance model which can translate physical liner parameters, such as honeycomb depth, facing sheet percentage open area, hole diameter etc, into resistance and reactance at the surface of the liner. 2) A source model for the fan stage which defines the modal content of the sound field propagating away from the fan stage. 3) A propagation model that predicts sound levels in the intake and bypass duct taking into account absorption by acoustically treated segments of the duct wall. 4) A radiation model which propagates the acoustic disturbance to the far field, and predicts the directivity of the resulting sound field. 5) An optimization procedure which embeds steps 1,2,3 and 4 within an EPNL calculation for the whole aircraft (including other noise sources) and determines the optimal values for the liner physical parameters at each certification point.
The study presented in this paper reviews the extent to which it is now practicable to use CAA tools to perform steps 3 and 4 in optimizing real liners on industrial timescales. The main obstacle in doing so is the time required to compute the large number of radiated sound fields needed for liner optimization. Such solutions must be generated for multiple frequencies extending to large Helmholtz numbers and for multiple engine operating conditions, and must span a large design space in terms of liner construction variables. Two different approaches will be demonstrated, and an automated procedure using current CAA codes and optimization tools will be shown to be feasible.
ISVR, University of Southampton, Southampton, UK
ABSTRACT
This paper presents an analytical study of the sound power radiated from a two dimensional flat plate airfoil in a turbulent stream. A classical approach for describing analytically the response of a flat plate, with a finite chord, to the impingement of turbulence is extended to be valid at all frequencies. Analytical asymptotic expressions, valid at low and high frequencies, are provided for the upstream, downstream and total sound power. A study of the effects of chord length on the total sound power at all reduced frequencies is presented. The isolated airfoil model presented in this paper will be subsequently used as a benchmark to study the effects of cascade in broadband interaction noise of fans.
The Acoustic Group, Lilyfield, NSW, Australia.
ABSTRACT
In Australia, persons preparing aircraft noise impact assessments using the INM are not normally acoustically trained. Therefore they do not necessarily know what the output means in noise terms. It is up to the acoustician to train the INM programmers. Over the last ten years the author has had to get the INM to agree with actual measurements, thereby overcoming the failings of the INM. This paper looks at the various modifications to the NPD dataset that have been required to get the INM to work. What about ANM: will it work or is it too expensive?
The Acoustic Group, Lilyfield, NSW, Australia.
ABSTRACT
Widespread criticism of the ANEF system for predicting aircraft noise impacts has led to the use of supplementary tools (N70, N70 X+, TA, daily ANEF). Do these tools work? Are they of benefit to the community or the aviation industry? Do they add confusion? Are they appropriate for military aircraft or general aviation?
The Acoustic Group, Lilyfield, NSW, Australia.
ABSTRACT
The Australian Department of Defence has a responsibility to provide accurate noise predictions and verification of same. Noise and Flight Path Monitoring Systems (NFPMS) have recently come on line for two bases. These systems have to be superior to general NFPMS so as to track military aircraft and circuit operations. The exciting possibilities of the NFPMS to validate INM/NoiseMap predictions and ascertain variations in noise exposure are discussed.
Massey University, Wellington, New Zealand Wellington International Airport Ltd, New Zealand
ABSTRACT
New Zealand has a protocol for aircraft noise management that really works and has public acceptance. It is based on the "airnoise boundary" concept, which was conceived by the lead author in 1987, and in 1992 was incorporated into a New Zealand Standard for airport noise management and land use planning. While designed specifically for aircraft noise control around airports, the concept has been successfully utilized also for the management of noise from shipping ports, quarries, transport hubs, and other industries.
The protocol is simple: if the industry cannot keep within its property boundary all of its daily sound emission above the level recommended by the World Health Organization as requisite for the protection of public health, it has to ask the local territorial authority for permission to have a larger area in which to contain the sound. The request is discussed in the public domain and eventually an area of land is designated for this purpose and its boundary, the "airnoise boundary", is defined on a map of the area. The industry is then legally bound to keep all the excess noise within this boundary, and a series of noise monitoring stations ensures this is done. In return, the land inside the airnoise boundary is subject to strict land use control. Since the airnoise boundary concept was adopted for New Zealand's capital city, Wellington, complaints that in the late 1980s numbered several hundred a year now number fewer than 20, while passenger numbers have more than doubled.
(1) Centre for Air Transport and The Environment, Manchester Metropolitan University, UK (2) TAROM, Romanian Air Transport, Romania
ABSTRACT
Several papers assess the involvement of airport stakeholders in managing community noise, while the airlines are considered just a 'source' of noise. This paper explains a way in which an airline can be actively involved in reducing the noise around airports, by implementing new arrival procedures, assessing the noise exposure and optimizing flight trajectories when necessary. The airline selected is TAROM, operating at its home airport, Bucharest Henri Coanda. The way an airline operates its fleet, changes flight paths or introduces new procedures will always have an impact on the airport's environmental capacity. In order to understand better how an airline can fly more efficiently, research is needed to quantify its potential using several techniques.
Thus, TAROM decided to become involved in several projects on noise and emissions reduction, starting from 2002. The present paper reflects the analysis of flight data collected during one of these initiatives, sponsored by Airbus. Assessing the implementation of a new landing procedure (CDA, Continuous Descent Approach) through FDR (flight data recorder) data requires teamwork and extensive analysis. Trial preparation and execution involve both pilots and controllers, while the assessment of the collected data is both technical (quantitative) and opinion-based (qualitative). The paper focuses on identifying the influence of the optimized trajectory in reducing the noise exposure around Bucharest Henri Coanda Airport. Difficulties with CDA implementation, data availability and gaps in knowledge are also highlighted. The potential to reduce noise is also analysed, and the importance of keeping a permanent dialogue with the community, as part of a joint airline-airport team, is explained.
Acoustic and Mechanical Engineering Laboratory(LEAM), Technical University of Catalonia, Terrassa, Spain
ABSTRACT
Currently, aircraft noise monitoring systems use a mesh of single microphones distributed around an airport to continuously sample the noise level. This requires a manual process of aircraft noise event detection and classification in order to distinguish aircraft events from the other noise events in the recording. In the present paper a 3-metre-long 12-microphone linear array is used to obtain automatically an aircraft noise recording free of background noise. The beamforming process separates the noise impinging on the array from above (potential aircraft noise) from the noise impinging from below (urban noise and reflections), and the results are enhanced by the use of a trigger condition on the difference between the two. The theoretical results reveal that the background noise in the aircraft noise recording can be attenuated by about 8 dB if the microphone array is optimally placed. The experimental tests show that, even in non-optimal placements, the array still provides better results than a single microphone if the threshold value in the trigger condition is properly set.
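A conceptual sketch of the up/down discrimination and trigger condition is given below; the 3 m, 12-microphone geometry follows the description above, but the delay-and-sum implementation, the synthetic signals and the 3 dB threshold are assumptions for illustration rather than the authors' algorithm:

```python
# Sketch: steer a 12-element line array up and down and trigger on the level difference.
import numpy as np

c = 343.0                                 # speed of sound, m/s
fs = 8000                                 # sample rate, Hz
n_mics = 12
spacing = 3.0 / (n_mics - 1)              # 3 m array, as described above
positions = np.arange(n_mics) * spacing

def delay_and_sum(signals, steer_deg):
    """Steer the line array to an elevation angle (+ up, - down) by integer sample shifts."""
    delays = positions * np.sin(np.radians(steer_deg)) / c
    shifts = np.round(delays * fs).astype(int)
    out = np.zeros(signals.shape[1])
    for m in range(n_mics):
        out += np.roll(signals[m], -shifts[m])
    return out / n_mics

def level_db(x):
    return 10 * np.log10(np.mean(x ** 2) + 1e-12)

# synthetic plane wave arriving from 40 degrees above the array plus uncorrelated noise
t = np.arange(0, 1.0, 1 / fs)
delays = positions * np.sin(np.radians(40)) / c
signals = np.array([np.sin(2 * np.pi * 500 * (t - d)) for d in delays])
signals += 0.5 * np.random.randn(*signals.shape)

up, down = delay_and_sum(signals, 40), delay_and_sum(signals, -40)
difference = level_db(up) - level_db(down)
print(f"up-down difference {difference:.1f} dB ->",
      "aircraft event" if difference > 3.0 else "no event")   # 3 dB threshold (assumed)
```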
Institute of Sound and Vibration Research, Southampton, UK
ABSTRACT
This paper compares measurements of the trailing edge self-noise reduction obtained using sawtooth and slit serrations on a NACA651210 airfoil. This work is relevant to reducing the noise from aircraft engines, aircraft wings and wind turbines. A detailed experimental study conducted in the ISVR's open-jet wind tunnel reveals noise reductions of up to 5 dB over a large frequency range by the introduction of these trailing edge designs. This paper presents the noise measurements for a range of jet speeds and sawtooth and slit geometries. The airfoil is at a 5° angle of attack and the boundary layer has been tripped so as to become turbulent. Measurements of the static pressure coefficient distribution along the chord of the airfoil are also reported, to allow the effects on lift to be assessed. Noise measurements for the sawtooth serrations are compared to the theory derived by Howe. Howe's theory is extended to include a series of slits and compared to experiments. It is shown theoretically that for a sawtooth profile high levels of noise reduction can be achieved, either when the serration wavelength is smaller than the boundary layer thickness or when the root-to-tip distance h is sufficiently large. It is shown theoretically that the slit serrations are not an effective noise reduction treatment, since the noise reduction asymptotes to zero at high frequencies. Experimental measurements of the noise reduction obtained using trailing edge sawteeth and slits are shown to be significantly less than predicted. The noise is shown to increase at frequencies above some critical frequency, which is shown to be independent of serration geometry.
(1) Aeroacoustics and Noise Control Laboratory, School of Mechanical and Aerospace Engineering, Seoul National University, Republic of Korea (2) Center for Environmental Noise & Vibration Research, School of Mechanical and Aerospace Engineering, Seoul National University, Republic of Korea
ABSTRACT
A movement to change the aircraft noise index from WECPNL to DENL has recently arisen in Korea. It is indispensable to determine a conversion formula between the indices in order to revise the current aircraft noise regulations and guidelines in terms of the new index. It is also essential to make full use of about 20 years of past aircraft noise measurement data and aircraft noise maps expressed in WECPNL, in order to avoid additional expense. Japan has suggested a relationship between WECPNL and DENL derived from unattended noise monitoring around various airports. However, the airport environments and the distributions of noise levels in Korea are different from those in Japan, because the proportions of joint-use airports differ. Therefore, the current paper derives a conversion formula between WECPNL and DENL adapted to the Korean airport environments. In doing so, the noise levels of commercial and joint-use airports are calculated in WECPNL and DENL, and compared with each other using unattended noise monitoring data around various airports, to investigate and clarify the relationship between WECPNL and DENL. The unattended noise monitoring data around Gimpo international airport were analysed to establish the conversion formula DENL = 0.7683 WECPNL + 2.2993 between WECPNL and DENL.
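The quoted conversion can be applied, and an analogous linear formula re-derived from paired monitoring data, as in the sketch below; only the formula DENL = 0.7683 WECPNL + 2.2993 comes from the abstract, while the sample values are invented:

```python
# Sketch: apply the quoted WECPNL-to-DENL conversion and re-fit a line from paired data.
import numpy as np

def denl_from_wecpnl(wecpnl):
    return 0.7683 * wecpnl + 2.2993          # formula quoted in the abstract

print(denl_from_wecpnl(80.0))                # convert a single historical WECPNL value

# re-deriving such a formula from paired annual index values at monitoring sites
wecpnl = np.array([68.0, 72.5, 75.0, 79.3, 83.1, 86.4])    # hypothetical monitoring values
denl   = np.array([54.5, 58.0, 60.1, 63.2, 66.3, 68.7])    # hypothetical monitoring values
slope, intercept = np.polyfit(wecpnl, denl, 1)
print(f"fitted: DENL = {slope:.4f}*WECPNL + {intercept:.4f}")
```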
Delft University of Technology, Delft, Netherlands
ABSTRACT
In this paper, we hypothesize and test the ideas that (1) people's subjectivity in relation to aircraft noise is shaped by the policy discourse, (2) this results in a limited number of frames towards aircraft noise, (3) the frames inform people how to think and feel about aircraft noise and (4) the distribution of the frames in the population is dependent on structural variables related to the individual. To reveal subjects' frames of the noise situation a latent class model is estimated based on survey data gathered among a sample of 250 residents living near Amsterdam Airport Schiphol, a major international airport. In line with expectations, the results show that there are four evaluative frames of aircraft noise, three of which are strongly linked to the policy discourse. The frames are shown to legitimate different degrees of annoyance response. In turn, frame membership is influenced by two structural variables, namely aircraft noise exposure and noise sensitivity. The results indicate that in the explanation of subjective reaction to noise social factors operate discursively, while psychological factors operate within a traditional cause-and-effect model.
CFD and Aeroacoustics Department, ONERA, France
ABSTRACT
Counter-rotating open rotors were extensively studied for powering aircraft in the 1980s after the first increases in fuel costs. Indeed, their efficiency is greater than that of turbofans and of single propfans. They have again become a topical subject due to the recent increase in fuel costs and the risk of oil shortage. However, they raise a serious acoustic issue because loud noise can be generated not only by the two rotors but also by their interactions. A new semi-empirical method is proposed to estimate radiated sound levels. Firstly, some theoretical foundations are recalled to explain that the tones which are efficiently radiated must have a supersonic circumferential phase Mach number. This is the reason why interaction tones are noisy in the certification conditions, i.e., at low advancing speed (take-off and approach). They generally exceed the sound levels due to each rotor.
Three main topics are then addressed. (1) It is shown that the directivity of a tone is mainly determined by a Bessel function. Interaction tones can strongly radiate near the centerline due to low-order Bessel functions, and this increases the duration of perceived sound levels. According to published experimental data, a parabolic directivity pattern is suggested for overall sound pressure levels. (2) The shape of third-octave spectra is derived from an original argument, based on the large number of interaction tones present in each spectral band. It is assumed that the squared sound pressure is proportional to the number of tones, each of them being weighted by the intensity of the loading harmonic at its source. To do that, the decrease of the blade loading harmonics with frequency is described by an analytical law. Spectra have to be completed at low frequency by a broadband component, but this should not greatly modify the overall sound levels. (3) Finally, the overall sound intensity varies as the thrust to the power 3 due to the dipolar nature of the main sources. Some corrections based on other published works are also applied for pusher propellers or to take into account an angle of incidence. The main interest of the model is to rapidly assess whether certification rules are fulfilled, and to predict the possible impact of a future fleet on noise contours around airports.
(1) Institute of Acoustics, Chinese Academy of Sciences, Beijing, P.R.China (2) Institute of Electronics, Taiyuan University of Science and Technology, Taiyuan, P.R.China (3) College of Information and Electrical Engineering, Shandong University of Science and Technology, Qingdao, P.R.China
ABSTRACT
In this paper, the thickness noise of a hovering helicopter is analysed. The noise caused by the main rotor is calculated according to Formulation 1A derived from the FW-H equation. Shape and configuration modification is discussed as a noise reduction method, including different airfoils and a tapered tip. An uneven helicopter rotor configuration with modified shape is proposed, which produces less noise than an ordinary rotor configuration. Meanwhile, the thickness noise of the uneven rotor is analysed for different modulation ratios and modulation types, including sine and cosine modes. By analysing different rotational rates, blade airfoils and numbers of blades, the sound pressure level and noise spectra are calculated. In addition, the effect of different numbers of grid points in the calculation is compared. Comparison of these calculation results shows that the method used in this paper is appropriate. Some useful conclusions and recommendations are obtained, which could serve as guidance for helicopter rotor acoustic design.
School of Jet Propulsion, Beihang University (BUAA), Beijing, P.R.China
ABSTRACT
In 1999, Tam and Auriault developed a theory capable of predicting the fine-scale turbulence noise from cold to moderate-temperature jets. In this jet noise prediction theory, they proposed a Gaussian noise source model function to represent the noise source time-space correlation function mathematically. In 2005, Tam and Pastouchenko modified the noise source model function of Tam and Auriault's theory to predict hot jet noise. The calculated results of Tam and Auriault's theory are in good agreement with experimental measurements over a wide range of radiation directions, jet velocities and temperatures. However, some noticeable deviations can still be observed between the prediction results and experimental data for some cases of single and dual-stream jets. For example, for a single-stream jet at low jet Mach numbers (0.3, 0.5) and a jet temperature ratio of 1.0, the prediction results deviate from experimental measurements in the high frequency range.
The main objective of this work is to improve the accuracy of the predictions of Tam and Auriault's theory by modifying the noise source model function. Two alternative noise source model functions, proposed by Khavaran and by Harper-Bourne respectively, are considered here. In addition, a frequency-dependent length scale proposed by Morris is applied to the noise source model functions. The effects of these three noise source model functions are evaluated within Tam and Auriault's jet noise theory through comparison with experimental results at several jet Mach numbers and temperature ratios for single- and dual-stream jets. The preliminary comparisons indicate that, for single-stream jets at low jet Mach numbers (0.3, 0.5) and a temperature ratio of 1.0, the predictions of Harper-Bourne's model function with the frequency-dependent length scale are in better agreement with the experimental measurements in the high frequency range. Detailed results will be provided in the full manuscript.
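As a purely generic illustration of the kind of ingredient being compared here, the sketch below writes a convected Gaussian space-time correlation and a frequency-dependent length scale. The exact published forms of Tam and Auriault, Khavaran, Harper-Bourne and Morris differ in detail; every function name, functional form and constant below is an assumption.

```python
# Generic, illustrative source-correlation ingredients; not the published model functions.
import numpy as np

def gaussian_correlation(xi, tau, u_c, l_s, tau_s):
    """Correlation of source fluctuations at separation xi (m) and time lag tau (s),
    convected at speed u_c, with length scale l_s and time scale tau_s."""
    return np.exp(-((xi - u_c * tau) ** 2) / l_s**2) * np.exp(-(tau / tau_s) ** 2)

def length_scale(l0, freq, f_ref):
    """An assumed frequency-dependent length scale that decreases with frequency."""
    return l0 / (1.0 + freq / f_ref)

print(gaussian_correlation(xi=0.01, tau=0.0005, u_c=20.0, l_s=0.02, tau_s=0.001))
```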
Ray W. Herrick Laboratories, School of Mechanical Engineering, Purdue University, West Lafayette, IN, USA
ABSTRACT
One important aspect of the operation of next generation supersonic aircraft is the potential impact that low amplitude sonic booms will have on people. Due to the rapid rise of these sounds, startle responses are possible. In two previous semantic differential experiments, judgments of startle were found to be highly correlated with judgments of annoyance. In addition, judgments of loudness could not fully explain startle or annoyance ratings. The linear model predicting startle or annoyance that performed best was based on the maximum loudness and the maximum derivative of loudness, as calculated using Glasberg and Moore's time-varying loudness algorithm. Research has focused on improving this model of startle and on examining how physiological responses relate to subjects' ratings of startle. As part of an experiment designed to examine the repeatability of subjects' physiological responses and to look more carefully at the influence of loudness derivative on annoyance, a paired comparison test was constructed. The maximum loudness and loudness derivative of the five low-level sonic boom stimuli were controlled to cover a range containing the threshold at which physiological responses associated with startle are found. Subjects completed two sessions 24 hours apart, and in each session the paired comparison test was repeated three times. In each of the six paired comparison tests, subjects heard the 20 pairs of sounds and selected which sound was more annoying. The repeatability of subject judgments across all six paired comparison tests will be discussed, as will the impact of loudness derivative on the judgments of the sounds.
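A minimal sketch of the type of linear model described above, fitting ratings to the maximum loudness and the maximum loudness derivative of each stimulus. The data arrays are placeholders, not values from the experiments, and the loudness metrics are assumed to have been computed separately.

```python
# Minimal sketch, not the authors' code: linear model rating ~ a*N_max + b*dN_max + c.
import numpy as np

n_max = np.array([10.2, 12.5, 8.9, 15.1, 11.3])    # max loudness, sones (placeholders)
dn_max = np.array([55.0, 70.0, 40.0, 90.0, 60.0])   # max loudness derivative, sones/s (placeholders)
rating = np.array([3.1, 4.0, 2.5, 4.8, 3.5])        # mean subjective rating (placeholders)

X = np.column_stack([n_max, dn_max, np.ones_like(n_max)])   # design matrix with intercept
coeffs, *_ = np.linalg.lstsq(X, rating, rcond=None)
predicted = X @ coeffs
print(coeffs, predicted)
```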
1. Civil Engineering Graduate Program - Rio de Janeiro Federal University - PEC/COPPE/UFRJ, Rio de Janeiro, Brazil 2. Federal Center of Technological Education of Rio de Janeiro - CEFET-RJ
ABSTRACT
This work is concerned with the development and application of a methodology that includes the affected communities' noise perception as a parameter in airport noise environmental impact studies. In Brazil, until now, airport noise environmental impact studies have been based primarily on Noise Zoning Plans and on noise contours simulated with the Integrated Noise Model software, as well as on noise measurements at selected external points aiming to characterize the aircraft noise contribution relative to background noise, for which the guidelines of Norm ABNT 10151 must be followed. To date, the airport noise perception of affected residents has not been included as a parameter in environmental impact assessment. Since March 2009 the neighborhood communities annoyed by landing and take-off noise from Santos Dumont Airport have been appealing actively and jointly to state environmental control institutions to solve the problem, which was worsening due to the expansion of airport operations. At the same time the first author began the implementation of a noise annoyance social survey in the Santos Dumont Airport neighborhoods. In the first stage of the work, interviews were conducted at about 70 different addresses distributed over five distinct districts, as a purposeful sample of residents contacted through residents' associations. The interviews were conducted by undergraduate students trained by the first author in a 40-hour course of lessons for developing field research skills. The carefully elaborated questionnaire applied during the interviews and the data collection methods are described in this paper. In the second stage of this work, noise measurements at selected points will be carried out according to the tabulation and analysis of the data collected in the noise annoyance social survey, aiming to constitute a complete social-acoustic survey in the near future.
(1) Department of Mechanical and Aerospace Engineering, Seoul National University, Korea (2) The Institute of Advanced Aerospace Technology, Department of Mechanical and Aerospace Engineering, Seoul National University, Korea
ABSTRACT
Noise prediction during flight is one of the important research themes for helicopters, as they radiate high noise levels toward the ground. There have been several attempts to predict the noise level with various attenuation effects. However, since the path and magnitude of the rays radiating from the helicopter change with the temperature and wind speed profiles in a refracting atmosphere, the propagation model has to include not only air absorption but also the effects of atmospheric stability. In this paper, effective sound speed profiles are calculated according to the Monin-Obukhov length, a measure of atmospheric stability, and used for the prediction of ray paths. Reflected and diffracted waves are also considered in the model. Noise sources for the helicopter are built with the HeliPA code developed in the AeroAcoustics and Noise Control Laboratory (AANCL). The noise sources from HeliPA are applied to the propagation model as input data, and EPNdB can be obtained from the predicted SPL results. Moreover, noise levels near a real airport region are simulated using a GIS terrain profile.
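A hedged sketch of one possible effective sound speed profile of the kind mentioned above: adiabatic sound speed from a temperature profile plus the wind component along the propagation direction, with a log wind profile and a simple stable-case Monin-Obukhov correction. The similarity function, constants and parameter values are assumptions, not those used in the paper.

```python
# Illustrative effective sound speed profile; not the authors' implementation.
import numpy as np

KAPPA = 0.4   # von Karman constant

def wind_profile(z, u_star, z0, L_mo):
    """Mean wind speed at height z for a stable surface layer (psi_m = -5 z/L assumed)."""
    return (u_star / KAPPA) * (np.log(z / z0) + 5.0 * z / L_mo)

def effective_sound_speed(z, T_z, u_star, z0, L_mo, cos_theta):
    """c_eff(z) = sqrt(gamma*R*T(z)) + wind component along the propagation direction."""
    c = np.sqrt(1.4 * 287.05 * T_z)
    return c + wind_profile(z, u_star, z0, L_mo) * cos_theta

z = np.linspace(1.0, 200.0, 200)                 # heights above ground, m
T_z = 288.15 - 0.0065 * z                        # simple linear temperature lapse (assumed)
c_eff = effective_sound_speed(z, T_z, u_star=0.3, z0=0.05, L_mo=100.0, cos_theta=1.0)
print(c_eff[0], c_eff[-1])
```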
ISVR, University of Southampton, Southampton, UK
ABSTRACT
Reducing aircraft noise is critical to the growth of air transport and to people's quality of life. Aircraft noise is composed of contributions from various source mechanisms, and fan noise is one of the dominant components at take-off and landing for aircraft with modern high bypass ratio turbofan engines. Fan noise is generated at the fan, propagates through the engine intake and bypass duct, and is radiated to the outside. Acoustic liners are applied to the internal duct walls to attenuate the fan noise as it propagates through the engine ducts. Typical engine duct liners are either so-called single degree of freedom (SDOF) or double degree of freedom (DDOF) liners. SDOF liners consist of a porous facing sheet backed by a single layer of cellular separator, such as honeycomb cells, with a solid backing plate; in DDOF liners two cellular layers are separated by a porous septum sheet. The acoustic performance of such liners depends strongly on the depth of the cell(s). Generally these liners are selected to be most effective at reducing community noise measured with EPNL (dB), and typical liner cell depths are around 1 to 2 inches. In order to increase the attenuation by the liners at lower frequencies the cell depth must be made larger, which is often prohibited by mechanical design constraints. One remedy is an acoustic liner with an L-shaped geometry, so that it can fit in a shallower space.
In this study the potential of folded-cavity liners is investigated. Such liners can behave like a mixture of deep and shallow liners. They have more complex frequency characteristics than conventional liners because of the fold, and can be used to reduce noise over a wider frequency range. Finite element models are used to assess the acoustic performance of the liners. Parametric studies are performed and the noise reduction capability when installed in an engine intake is demonstrated.
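A simple back-of-the-envelope calculation, not taken from the paper, showing why lower-frequency attenuation calls for deeper cells and hence why a folded cavity is attractive: the peak absorption of a straight, locally reacting cell sits near its quarter-wavelength resonance.

```python
# Quarter-wavelength estimate for a straight liner cell (illustrative only).
c0 = 343.0                      # speed of sound in air, m/s

def quarter_wave_frequency(depth_m):
    """Approximate peak-absorption frequency of a straight cell of the given depth."""
    return c0 / (4.0 * depth_m)

print(quarter_wave_frequency(0.025))   # ~3.4 kHz for a ~1 inch cell
print(quarter_wave_frequency(0.05))    # ~1.7 kHz for a ~2 inch cell
print(quarter_wave_frequency(0.10))    # ~860 Hz needs ~4 inches of depth, hence the fold
```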
Airport Environment Improvement Foundation, Japan
ABSTRACT
When calculating noise contours, we usually take account only of fly-over noise after the start of the take-off roll or before the end of engine reversal after landing. At the end of 2008 in Japan, however, the national noise guideline "Environmental Quality Guidelines for Aircraft Noise" was revised to use Lden as the noise index instead of WECPNL. The revised guideline also requires taking account of the noise contributions of aircraft ground operations, such as taxiing, use of the APU and engine run-ups on the apron, when the impact of such noise sources is expected to be significant. In such cases, a soundproofing embankment is sometimes constructed to obstruct over-ground sound transmission outside the airport. This paper discusses a way of modeling airport noise that takes account of soundproofing embankments and the noise exposure due to aircraft ground operations.
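For reference, a minimal sketch of the standard Lden computation from day, evening and night A-weighted equivalent levels, using the usual 12 h / 4 h / 8 h split with +5 dB evening and +10 dB night penalties. The period boundaries adopted in the Japanese guideline may differ; the input levels are placeholders.

```python
# Standard Lden combination of period levels (illustrative inputs).
import math

def lden(l_day, l_evening, l_night):
    return 10.0 * math.log10(
        (12 * 10 ** (l_day / 10.0)
         + 4 * 10 ** ((l_evening + 5.0) / 10.0)
         + 8 * 10 ** ((l_night + 10.0) / 10.0)) / 24.0
    )

print(round(lden(60.0, 55.0, 50.0), 1))   # 60.0 dB for these example inputs
```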
(1) Aviation Environment Research Center AEIF (2) Defense Facilities Environment Improvement Association (3) Osaka University, Osaka, Japan
ABSTRACT
This paper describes the results of a preliminary survey on the coexistence of airports and local communities. The relationship between the airport and local communities has been improved by the advancement of noise mitigation at the source and by the implementation of countermeasures such as soundproofing of houses around the airport. Nevertheless, the impact of aircraft noise continues to be a matter of serious concern for residents in the neighborhood of the airport, due to the continuing increase in aircraft movements. We believe that an effective solution is for the airport to grow into an invaluable resource for local communities, rather than remain a subject of complaints, i.e., a NIMBY facility. To explore the coexistence of the airport with local communities, we performed a trial perspective survey on the compatibility between convenience of life and environmental protection, using two methods: a questionnaire survey and a Picture-Frustration (P-F) study. Respondents were university students. The results of the questionnaire survey suggest that they placed priority on avoiding negative burdens on their living environment, and that they may hesitate to accept the inconvenience and limited employment opportunities of country life, although they have no objection to the preservation of the natural environment. The results of the P-F study also suggest that respondents tend to have a negative attitude toward living in the country without convenience, and at the same time tend to react clearly negatively to obvious noise damage. We can conclude that they are strongly aware of the importance of the natural environment, but at the same time wish that convenience of life were compatible with it.
URS New Zealand Ltd., Christchurch, New Zealand
ABSTRACT
New Zealand Standard NZS 6808 provides methods for the prediction, measurement, and assessment of sound from wind turbines. The 1998 version was written prior to significant wind farm development in New Zealand, and while the basic methodology proved robust, experience and research over the following decade brought to light numerous refinements and enhancements which are now addressed in the new 2010 version. This paper describes the revision process, and explores the technical issues addressed and key areas of debate. This was a challenging project, with wide-ranging views both within the committee and from hundreds of public submissions.
(1) AECOM, Adelaide, SA, Australia (2) Cyclopic Energy, Adelaide, SA, Australia
ABSTRACT
Wind-induced noise is a problem that can affect outdoor acoustic measurements. This issue is particularly relevant in the context of wind farm assessments, where the dependence of ambient noise on the local wind speed is of primary importance when determining the noise criteria for surrounding residences. This paper is a continuation of work presented at the 2008 Australian Acoustical Society conference, which examined the factors that alter wind-generated microphone noise using wind tunnel measurements. This paper presents the results of atmospheric measurements of wind-generated microphone noise, and provides a relationship between wind speed and microphone-generated LAeq noise level for a range of wind shields. A method for predicting LA90 noise levels due to wind-generated microphone noise is also provided, and the results are compared to noise levels predicted using the average wind speeds. Field measurements were necessary to determine the relationship between wind speed and microphone noise, because it had previously been found that incident wind turbulence alters the induced aerodynamic noise levels, such that noise levels measured in the wind tunnel could not be applied to environmental noise measurements.
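The sketch below only illustrates how a wind speed versus microphone noise relationship of the kind described above could be fitted and then used to estimate levels at another wind speed. The data points, the logarithmic form of the fit, and the wind shield are all assumptions, not the measured relationships reported in the paper.

```python
# Illustrative regression of wind-induced microphone noise against wind speed.
import numpy as np

wind_speed = np.array([2.0, 4.0, 6.0, 8.0, 10.0])     # m/s (placeholder data)
laeq_mic = np.array([18.0, 28.0, 36.0, 42.0, 47.0])    # dB(A) (placeholder data)

# Fit LAeq = a*log10(U) + b, a commonly assumed form for flow-induced noise.
A = np.column_stack([np.log10(wind_speed), np.ones_like(wind_speed)])
a, b = np.linalg.lstsq(A, laeq_mic, rcond=None)[0]
print(a * np.log10(5.0) + b)   # estimated microphone noise LAeq at 5 m/s
```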
University of Perugia, Italy
ABSTRACT
Noise is the main factor contributing to the environmental pressure produced by wind farms. People living near wind farms often complain of annoyance, which has been shown to result from both noise and visual impact. Under the hypothesis that annoyance is tolerable when legal limits are respected, noise levels may be kept below such limits by changing the configuration of the turbine parameters with respect to wind velocity and direction. However, changing the turbine parameters reduces the electrical power produced. In this paper a wind farm management plan is proposed which allows noise limits to be respected at any nearby human settlement; the plan has been determined by correlating wind speed statistics, turbine noise emission characteristics, turbine configuration parameters and propagation equations. Adoption of the proposed management plan reduces the electricity produced and the associated earnings; thus a comparison between environmental benefits and economic losses is finally proposed.
ISPRA - Institute for the Environmental Protection and Research, Rome, Italy
ABSTRACT
The Italian legislation currently in force does not provide specific rules concerning noise produced by turbines in wind farms; this gap creates uncertainty in the procedures adopted, particularly regarding measurement standards. For example, on the basis of the Italian rules, noise measurements should be performed at a wind speed lower than 5 m/s at the microphone, whereas wind turbines produce their highest acoustic impact above that wind speed. Moreover, conflicts can arise between the noise levels measured at wind farms and the limit values fixed for the zones hosting such plants, which often coincide with highly protected areas where noise limits are very low.
Given the need for a clear interpretation of the existing legislation, and pending a future revision, ISPRA - the Institute for Environmental Protection and Research - has launched a project to define legal standards for this specific source. The activity has begun with a measurement campaign to characterize the environmental noise level and the specificity of the source, in order to assess its impact.
Aurecon, Sydney, NSW, Australia
ABSTRACT
Wind farms are an important part of the renewable energy strategy; however, with developments predominantly occurring in rural areas with low background noise levels, they can significantly alter the existing noise environment, creating considerable impacts for the affected sensitive receivers. The South Australian EPA "Wind farm environmental noise guidelines" and New Zealand Standard NZS 6808 "Acoustics - Wind farm noise" are the predominant environmental noise assessment methods employed in Australia and New Zealand. Both of these documents have undergone recent revisions, along with the introduction of Australian Standard AS 4959 "Acoustics - Measurement, prediction and assessment of noise from wind turbine generators". This paper investigates and assesses the recent changes in methods, with a particular focus on addressing the effect of atmospheric stability on the developed noise criteria.
(1) School of Mechanical Engineering, Pusan National University, Korea (2) Fluid Flow/Acoustics & Vibration Group, Division of Physical Metrology, Korea Research Institute of Standards and Science, Daejeon, Korea
ABSTRACT
In this paper, the aerodynamic noise sources of upwind horizontal-axis wind turbines are experimentally and theoretically investigated. First, dominant noise sources on the rotor plane of wind turbines are localized by using beamforming techniques. These visualized acoustic fields reveal the dominant source locations on the wind turbine. Then, theoretical predictions for identifying the dominant source locations are made by using the empirical noise prediction model of Brooks et al. (1989) for airfoil self-noise. Through comparison of the predicted results with the experimental data, it is shown that predictions using the formula for laminar boundary layer vortex shedding (LBLVS) noise do not match the measurements, which points to the need to improve the present empirical prediction formula.
Science and Assessment Division, Environment Protection Authority, Adelaide, SA, Australia
ABSTRACT
Control of noise impact from proposed and existing wind farms is in many cases a priority in assessing the environmental impact of wind turbines. In accordance with regulatory procedures, the noise limits are to be met statistically. This involves noise monitoring at the relevant receivers to collect a sufficient amount of data at the wind speeds of interest, and subsequent post-processing comprising curve fitting of the data or averaging of the data in bins at particular wind speeds. When the total noise in the vicinity of the wind farm exceeds the regulatory limits, the measured level is corrected for background noise to calculate the wind farm noise. Generally the background data acquisition is performed before the wind farm construction, and the result of the logarithmic subtraction can be doubtful; for example, at some wind speeds the background levels can be higher than the total noise. Changes in the background noise, or simply peculiarities of the data post-processing, may create difficulties in implementing the correction-for-background method.
This paper suggests performing a statistical analysis of the total noise and background data gathered at a particular wind speed, based on arbitrary combinations of the measured levels and their probability analysis. A data rectification process is considered to eliminate improbable combinations of the measured parameters. The method enables a more realistic wind farm noise magnitude to be calculated and is statistically more viable.
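For context, the sketch below shows the standard energy (logarithmic) subtraction used to correct a total measured level for background noise, including the failure case the abstract highlights when the background exceeds the total level. The statistical treatment proposed in the paper itself is not reproduced here.

```python
# Standard background correction by energy subtraction (illustrative values).
import math

def background_corrected_level(l_total, l_background):
    diff = 10 ** (l_total / 10.0) - 10 ** (l_background / 10.0)
    if diff <= 0.0:
        raise ValueError("background level >= total level: correction not defined")
    return 10.0 * math.log10(diff)

print(round(background_corrected_level(40.0, 35.0), 1))   # ~38.3 dB wind farm contribution
```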
National Acoustic Laboratories, Sydney, NSW, Australia
ABSTRACT
Maximum allowable noise exposure levels are well established for the workplace. For example, Australian occupational health and safety regulations mandate a maximum allowable daily workplace noise exposure level (LAeq,8h) of 85 dB (INCE: 1997). However, a person's day extends beyond the 8 hours spent at work, and thus noise exposure during non-working hours (leisure time) also contributes to a person's overall noise exposure. To investigate the levels of noise experienced during leisure activities, a long-term study is under way to document the noise levels of a wide range of leisure activities. Measurements are being undertaken in 7 main categories: attendance at entertainment venues, attendance at sports venues, active recreation and sport, arts and cultural activities, travel, domestic activities, and other activities. In conjunction with these measurements of individual activities, a group of participants has been recruited to measure their personal noise exposure levels over a 4-day period, including 2 work days and 2 weekend (leisure) days. The data collected thus far reveal that, while many leisure activities are below the allowable noise levels and are thus 'safe', there are other leisure activities which, if engaged in regularly over a long period of time, have the potential to shift a person's noise exposure beyond allowable limits and thus increase the risk of acquiring a hearing loss at a relatively early age.
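A hedged sketch of how measured activity levels can be combined into an equivalent 8-hour exposure level LAeq,8h; the activity levels and durations below are placeholders, not results from the study described above.

```python
# Combine activity LAeq values and durations into an 8-hour equivalent exposure level.
import math

def laeq_8h(activities):
    """activities: list of (LAeq in dB(A), duration in hours)."""
    energy = sum(t * 10 ** (l / 10.0) for l, t in activities)
    return 10.0 * math.log10(energy / 8.0)

day = [(82.0, 8.0),    # work shift (placeholder)
       (95.0, 1.5),    # concert attendance (placeholder)
       (70.0, 2.0)]    # domestic activities (placeholder)
print(round(laeq_8h(day), 1))
```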
Leibniz-Research Centre for Working Environment and Human Factors, Austria
ABSTRACT
This study focused on the effects of noise on sleep at night and during the day, and on after-effects, i.e. sleepiness and impaired performance during a following 8-h work shift. Forty-eight persons (23 male, 25 female; 19-30 yrs) slept, in a balanced order, four consecutive nights (2300-0700 h) and four consecutive days (1400-2200 h) in the laboratory and thereafter performed 8-h work shifts in the morning or at night, respectively. Sleep was recorded polysomnographically to derive the Sleep Efficiency Index (SEI) and the Sleep Disturbance Index (SDI). Subjective sleep quality (SSQ) and sleepiness were estimated after awakening. Sleepiness was rated hourly during the work shifts, followed by four performance tests (Go/Nogo, Divided Attention, Working Memory, Psychomotor Vigilance). The first and another randomly chosen sleep period were noise-free. During the six other sleep periods the participants were exposed to railway and road traffic noise with equivalent noise levels between 41 and 56 dBA.
The physiological sleep parameters SEI and SDI indicated worse sleep during the day than during the night, but were not additionally affected by noise. SSQ was the only variable for which noise and shift type showed an interaction. For sleep in quiet, SSQ was rated worse after day sleep than after night sleep, but it was similar after day and night sleep with noise exposure, meaning that noise has a greater impact on night sleep than on day sleep. Soon after awakening, sleepiness was rated significantly higher after night sleep than after day sleep, probably due to sleep inertia, and was rated higher after noise exposure during sleep. Sleepiness increased during the second half of the night shifts and, after sleep in noise, was higher throughout the work shifts than after sleep in quiet. Performance decrements during night shifts consisted mainly of an increase in errors and missed responses, whereas noise exposure during sleep was followed by prolonged reaction times. Overall, noise exposure during sleep caused similar effects on the physiological sleep parameters, on sleepiness and on subsequent performance irrespective of the temporal location of sleep. However, as day sleep was worse than night sleep and the related after-effects were then stronger, the effects of noise are regarded as relatively stronger for day sleep. In real-life situations noise effects are expected to be much stronger, as the equivalent noise levels are about 8 to 15 dBA higher during the day than during the night.
(1) ARPAT - Tuscany Environmental Protection Agency, Florence, Italy (2) CNR IDASC IA "O.M. Corbino" (National Research Centre - Institute for Acoustics and sensoristic Science) Rome, Italy
ABSTRACT
The availability of strategic noise mapping at the European scale allows the population exposure of each country to be compared, although unfortunately different methods were used. This paper focuses on the sources of uncertainty that arise in exposure estimation depending on the method used to calculate noise levels at receivers, to assign levels to buildings, and to assign population to each building.
Results show that inaccurate estimations can lead to wrong action plans (funds directed to the wrong places) and can also affect the results of epidemiological studies; therefore, the choice of method should consider the aim of the study before levels are assigned. A simple method, considering only the maximum level at the façade, is not suitable for epidemiological studies or action plans. Before running the calculations, it is necessary to establish noise scoring indicators and choose the most accurate method to determine them: in fact, noise scores based on highly-annoyed curves (and the curves themselves) can vary according to the method.
Kyoto University, Japan
ABSTRACT
Existing community noise indices are mostly proposed to predict community response, especially annoyance, based on information about the sound level correlated with the response. For night noise, indices such as LAE, LAmax and Lnight have been used to predict sleep disturbance due to noise, as well as adverse health effects probably caused by the sleep disturbance, without firm evidence for using these indices. This paper proposes a noise index based on neurophysiological facts about the awakening process. Recent neurophysiology has revealed that wakefulness and sleep are dominated by nuclei in the brainstem, where the potential causing awakening is considered to be integrated. From this evidence, a night noise index, Nawake,year, was derived on the basis of the integration of the awakening potential. The potential as a function of sound level was estimated from the existing dose-response relationship between LAE and the percentage of awakening due to a single noise event. The index gives the total number of awakenings per year, and is robust for a wide variety of numbers of noise events during night time. Simulated calculation of awakenings due to night noise showed that the index Nawake,year had a sufficiently linear relationship with the number of night-time awakenings, while Lnight brought about remarkable variation in this relationship. Some examples of the application of Nawake,year are presented on the basis of sound level measurements of traffic noise.
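An illustrative sketch, not the published derivation, of an index of the Nawake,year type: summing, over all night-time noise events in a year, an awakening probability taken from a dose-response relation between LAE and the probability of awakening from a single event. The logistic coefficients and event levels below are assumptions.

```python
# Illustrative yearly awakening count from per-event awakening probabilities.
import math

def p_awake(lae):
    """Assumed dose-response curve: probability of awakening for one event of level LAE."""
    return 1.0 / (1.0 + math.exp(-(lae - 75.0) / 6.0))

def n_awake_year(event_levels_per_night, nights=365):
    """Expected number of awakenings per year for a typical night of events."""
    return nights * sum(p_awake(lae) for lae in event_levels_per_night)

typical_night = [68.0, 72.0, 70.0, 74.0, 66.0]    # indoor LAE values, placeholders
print(round(n_awake_year(typical_night)))
```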
Ray W. Herrick Laboratories, School of Mechanical Engineering, Purdue University, West Lafayette, IN, USA.
ABSTRACT
Most models that predict the effect of aircraft noise on sleep relate the percentage of people awakened to the indoor noise level of the event, as measured using either LAmax or SEL(A). However, results from laboratory and field studies indicate that night-time noise events do not only increase the number of awakenings but also change an individual's sleep structure. The duration of awakenings increases with noise level, and there is a reduction in slow wave and rapid eye movement sleep. These changes may cause next-day effects such as decreased performance and increased sleepiness, as well as long-term health problems such as hypertension. Therefore, in order to predict the effect that noise-induced sleep disturbance has on health, more sophisticated models of sleep disturbance may be needed. Markov and nonlinear dynamic models have been developed to predict changes in sleep structure during the night. The nonlinear dynamic models predict non-noise-disturbed sleep. A discussion of whether these nonlinear models could be used to predict sleep disturbance due to aircraft noise is provided.
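A minimal sketch of a discrete-time Markov model of sleep-stage transitions of the general kind referred to above; the stages, epoch length and transition probabilities are invented for illustration and are not fitted values from any study.

```python
# Toy Markov chain over sleep stages; transition matrix values are illustrative only.
import numpy as np

stages = ["Wake", "Light", "SWS", "REM"]
P = np.array([
    [0.70, 0.25, 0.03, 0.02],   # from Wake
    [0.05, 0.75, 0.12, 0.08],   # from Light
    [0.02, 0.18, 0.75, 0.05],   # from SWS
    [0.04, 0.16, 0.02, 0.78],   # from REM
])

rng = np.random.default_rng(0)
state, history = 1, []
for _ in range(8 * 60 // 2):                 # one 8-h night in 2-minute epochs
    history.append(stages[state])
    state = rng.choice(4, p=P[state])
print(history[:10])
```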
(1) Kwansei Gakuin University, Japan (2) Kyoto University, Japan
ABSTRACT
In the present study, the economic value of sleep disturbance due to traffic noise was examined by means of the contingent valuation method (CVM). In 2009, we conducted a questionnaire study in an area in Urayasu, Chiba, Japan; residents of this area are expected to be affected by the expansion of the Tokyo International Airport, which is scheduled for 2010. Residents were asked about their willingness to accept compensation (WTA) per month for once-a-month and once-a-week sleep disturbance due to traffic noise. Although open-ended CVM was employed to gather residents' opinions on WTA, two choices ("I do not need compensation because it does not bother me" and "I need more than money, and do not accept the disturbance") were also offered. Two versions of the questionnaire, one asking about aircraft noise and the other asking about road traffic noise, were prepared. Each version of the questionnaire was sent to 1,600 residents in the study area. As a result, 1,947 responses with a signature on the consent form were collected and the number of valid responses obtained was 1,829 (906: aircraft noise, 923: road traffic noise). It was found that respondents' WTA did not seem to differ depending on whether the disturbance was due to aircraft noise or road traffic noise. It was also found that there were great differences between individual WTAs for sleep disturbance due to traffic noise. The median value of the WTA for once-a-month sleep disturbance was 26,000 JPY/month (25-75th percentile: 3,000-"More than money"). The median value of the WTA for once-a-week sleep disturbance was "More than money" (25-75th percentile: 15,000-"More than money"). Furthermore, it was revealed that respondents' WTA varied significantly according to their basic attributes, such as age, socio-economic status, and subjective noise sensitivity.
National Acoustic Laboratories, Sydney, Australia
ABSTRACT
Most, if not all, industrialised countries now have occupational health and safety regulations in place with regard to hazardous noise exposure in the workplace. The challenge now is how to account for exposure to non-work and leisure noise, and how to determine whether this is a potential problem replacing, or in addition to, workplace noise. Recent studies undertaken by the National Acoustic Laboratories indicate that non-work and leisure noise exposure can have a significant effect compared to workplace noise, thus compounding the problem of maintaining good hearing health. These studies have also provided a straightforward method for comparing the overall effects of all noise exposure using a noise exposure profile.
Ergonomics Laboratory, University of Minho, Guimaraes, Portugal
ABSTRACT
Noise is widely recognized as one of the most important risk factors in occupational environments, particularly as regards the risk of developing hearing loss. However, noise exposure may also cause other important effects, namely at the cognitive level. Teaching activities with young students can, by their very nature, be a very demanding job in terms of cognitive requirements. Considering this, this study aims to find out the possible relationship between classroom noise exposure and teachers' cognitive performance. As this relationship is analyzed from the point of view of cognitive impairment, it is important to also bear in mind individual noise sensitivity. Accordingly, this study also includes the application of Weinstein's Noise Sensitivity Scale (WNS). The study sample includes 16 teachers, who were divided into 2 different groups, one related to practical teaching activities (P) and the other to theoretical teaching activities (T). Subjects were also divided, according to the WNS score obtained by each of them, into a Noise Sensitive (NS) and a Non-Noise Sensitive (NNS) group. Noise exposure was measured in all the classrooms considered during four weeks, and the corresponding equivalent noise level was registered. In order to test and register the teachers' cognitive performance, all the teachers performed a cognitive test, applied on a personal computer, during four weeks and at two different moments within the same day. The results obtained indicate that, in terms of noise exposure, the highest registered one-hour equivalent levels were 73.0 and 84.3 dB(A) for the P and T groups respectively. The results of the cognitive performance tests show that the P group performed better than the T group. However, both groups showed a decrease in performance after being exposed to classroom noise. When analyzing performance in the two noise sensitivity groups, it can be noticed that the NNS group performed better, but both groups also showed a decrease in cognitive performance under the same exposure circumstances. The results showed that there is a statistically significant relationship between noise exposure and cognitive performance for the teachers considered, although this may not occur in all the scenarios analysed. Finally, it is important to mention that these results show the need to consider noise exposure as a risk in cognitively demanding jobs, such as teaching.
Federal Institute for Occupational Safety and Health (BAuA), Dortmund, Germany
ABSTRACT
Music means pleasure and passion to both consumers and performers. However, there are potential risks of hearing damage for those workers in the music and entertainment sector who are repeatedly exposed to loud music over years of their working life. Accordingly, the European Directive 2003/10/EC on occupational noise refers to all workers, expressly including those in the music and entertainment industry. In addition to the national transposition by the European member states, the directive required national guidelines to be provided to support its practical implementation. The German guideline was published by the BAuA and developed within a working group including different professional associations and social partners.
The crux of the matter in this sector is that sound is an absolutely intended and essential feature, but it may be harmful at the same time. Nevertheless, the fundamental principles of noise control, e.g. the general obligation to reduce noise at the source and the priority of collective protection measures over individual protection measures, are implemented in the directive. Where noise exposure exceeds the action levels, further measures have to be applied: implementation of noise reduction programmes, marking of noisy workplaces, use of hearing protection, and health surveillance. Moreover, a limit of 87 dB(A) for the noise exposure level, taking into account the attenuation of hearing protection, shall be complied with. The approved way of noise control corresponding to these regulations is noise reduction at the source, on the transmission path, by organizational measures and by the application of hearing protection. But this procedure turns out to be a challenge in music and entertainment. This contribution covers the sectors of orchestra musicians and workers in music clubs. The sound exposure of these employees and options for limiting their exposure are described, in particular with regard to technical measures. Options for noise control directly at the source are often limited and have to respect the audience's expectations. Thus measures on the transmission path from the sound sources to the individual workers become relevant. Principles such as suitable setups, distance from sound sources, screens, suitable absorption and room acoustics should be considered in both cases, an ensemble or a music club. The fundamental goal is to protect the workers while guiding the sound to the audience. Nevertheless, there is no general solution; often only a combination of several individually adapted measures can yield an applicable exposure control.
Acoustics and Protection of the Soldier Group, French German Research Institute of Saint-Louis (ISL), Saint-Louis, France
ABSTRACT
The European Directive 2003/10/EC, adopted by the European Parliament in 2003, has been implemented in most countries of the European Community since 2008. This directive defines the different actions to be taken by the employer when employees are exposed to continuous or impulse noise exceeding the lower or upper exposure action values. It also defines maximum exposure levels to which employees may be exposed. For continuous noise these levels are given as daily noise exposure levels (Lex,8h); for impulse noise only the peak pressure level is relevant. The actions to be taken are: at the lower action value (Lex,8h ≥ 80 dB(A) or Lpeak ≥ 135 dB(C)) hearing protectors have to be made available to the worker, and at the upper action value (Lex,8h ≥ 85 dB(A) or 137 dB(C)) the hearing protectors have to be used. The exposure limit values (taking the hearing protection into account) are Lex,8h = 87 dB(A) for continuous noise and Lpeak = 140 dB(C) for impulse noise. As the noise environments to which soldiers are exposed often exceed these values, it is important to analyze the impact of this regulation on the efficiency of training and/or combat. The presentation describes the principal types of noise to which soldiers are exposed. The exposure criteria used for weapon noise in different countries will be discussed and compared to the European regulation.
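A hedged sketch of the daily noise exposure level Lex,8h for continuous noise and the action-value checks described above; the peak-level criterion for impulse noise is a separate check on the C-weighted peak level. The exposure values used are placeholders.

```python
# Daily noise exposure normalised to 8 h and action-value checks (illustrative inputs).
import math

def lex_8h(laeq_te, te_hours):
    """Lex,8h = LAeq,Te + 10*log10(Te / 8 h)."""
    return laeq_te + 10.0 * math.log10(te_hours / 8.0)

lex = lex_8h(laeq_te=86.0, te_hours=6.0)
print(round(lex, 1))
print("lower action value reached:", lex >= 80.0)
print("upper action value reached:", lex >= 85.0)
print("exposure limit (incl. hearing protection) reached:", lex >= 87.0)
```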
Department of Applied Physics, College of Sciences, University of Sharjah, United Arab Emirates
ABSTRACT
The aim of this study is to investigate the effect of the noise levels encountered in dental clinics located in various United Arab Emirates (UAE) cities on the professionals working in these clinics and on the patients visiting them. Out of the one thousand surveys that were distributed, we collected 860. Six hundred and twenty-three (623) completed surveys were collected from patients and 137 from dental professionals, from 27 dental clinics located in various UAE cities. For dental professionals, the questions focused on the effects of the noise encountered in the clinic on their hearing and on the interference of noise with their communication with patients and dental assistants. For the patients, the questions focused on the effect of noise on their decision to visit the dental clinic and undergo subsequent or follow-up treatment. In addition, for both dental professionals and patients, the surveys included questions on annoyance using a scale from 1 (not at all annoyed) to 5 (extremely annoyed). The results showed that 17% of the dental professionals reported hearing-related problems since they joined the clinic, and 32% of them reported experiencing communication problems with their patients because of noise. On the 1-5 annoyance scale, 29% of the dental professionals felt "extremely annoyed" by the noise in the dental clinic, while only 3% felt "not at all annoyed". For patients, regarding the most annoying experience during their visit, the sound of the drill came first, with 47% of the patients feeling "extremely annoyed" by it. Regarding the effect of noise on their decision to come back for follow-up treatment, 35% of adults and 53% of children (10-14 y) reported that it plays a role in their decision; this came second only to waiting time. Gender and age gaps, as well as other survey results, will be presented.
In conclusion, noise levels in UAE dental clinics, which were found to reach values up to 94 dB(A) for compressed air blasts and 91 dB(A) for some cutting activities, steam cleaning and sandblasting, appear to have an effect on UAE dental professionals as well as on the patients visiting these clinics. Even though these levels are below the limit for risk of hearing loss, extended exposure may become a real risk if proper ear protection is not used. For patients, the noise appears to be a determining factor in their decision to undergo dental treatment. Recommendations on how to deal with this will be discussed.
EMC, UFSC - Universidade Federal de Santa Catarina, Florianopolis, Brazil
ABSTRACT
In many industrial and military situations it is not practical or economical to reduce the noise to levels that do not present either a hazard to hearing or annoyance. In these situations, personal hearing protection devices are capable of reducing the noise by up to around 40 dB. Although the use of a hearing protector is recommended as a temporary solution until action is taken to control the noise, in practice it ends up as a permanent solution in most cases. Therefore, hearing protectors must be both efficient in noise attenuation and comfortable to wear. Comfort in this case is related to the user's willingness to wear the hearing protector consistently and correctly at all times. The purpose of this paper is to review publications related to earmuff comfort, most of which are based on measurement of the total headband force and on subjective evaluation using questionnaires. Most of the published results show a weak correlation between total headband force and subjective evaluation. This paper presents new quantitative indices based on comfort parameters, mainly measurements of the contact pressure distribution between the earmuff cushions and the circumaural flesh of the human head. The comfort parameters were investigated and equations developed to calculate comfort indices and overall quality indices. The calculated indices are correlated with subjective evaluations. Measurement results for the pressure distribution of ten earmuffs show good correlation with subjective evaluation.
(1) Hearing Research Laboratory, Noise and Communication Research Unit, University of Ottawa, Ottawa, ON, Canada (2) Noise Pollution Clearinghouse, Montpelier, VT, USA.
ABSTRACT
Recent studies on the use of portable audio listening devices indicate that, while sustained listening at the maximum output volume of the unit is potentially hazardous with most devices, personal habits and listening preferences during actual use are such that only 5-10% of users may be at high risk of developing permanent hearing damage. Given the explosive increase in sales of these devices in recent years, this nevertheless represents a very large number of individuals worldwide. In this project, self-administered auditory temporary threshold shift (TTS) measurements are investigated as a possible tool to raise user awareness of the potential risks of portable listening devices when used at excessive levels. Users are presented with a sequence of 10 tones varying in level near threshold using their device, and are asked to count the number of tones heard before and after the listening session. Counting fewer tones after the exposure indicates the presence of a TTS. Test-retest reliability measurements indicated that a TTS of 5 dB or more could be detected with this method. A validation study is currently being carried out in a laboratory setting with a group of users with normal hearing. Subjects listen to music with their own devices for a one-hour session in a simulated bus noise environment. Thresholds are measured prior to and after exposure using the proposed counting method and, as a control, a fixed-frequency adaptive tracking method. Listening levels are also monitored using a KEMAR manikin. Preliminary results indicate that a fraction of subjects develop a TTS of 5 dB or more, which is typically shown by both threshold methods. Interestingly, these subjects all listened to music at levels exceeding 90 dBA during the session. It is hoped that such a tool could help users self-detect potentially hazardous situations and foster safer listening practices.
(1) Central Institute for Labour Protection - National Research Institute, Warszawa, Poland (2) The Fryderyk Chopin University of Music, Warszawa, Poland
ABSTRACT
Exposure of musicians to sound was measured during rehearsals and at a concert of a wind symphony orchestra. The measurements were made with eight microphones in various locations on stage and averaged over the duration of 10 musical pieces. It was found that the LAeq values ranged from 83.0 to 106.5 dB. The daily noise exposure levels LEX,8h corresponding to the measured levels exceeded 85 dBA at 8, 5, and 6 microphone positions during a rehearsal, the dress rehearsal and the concert, respectively. Peak C-weighted SPLs ranged from 119.0 to 132.1 dB. These results suggest that musicians are exposed to sound levels that are hazardous to hearing. To study the effect of the musicians' sound exposure on hearing, an experiment was conducted in which subjects were exposed in the laboratory to recordings of sound that replicated the conditions of exposure on stage. In five subjects, the TTS measured after a 40-minute exposure to recordings of seven pieces reached 15 dB. It was also found that the TTS could be prevented by using ER 20 Musicians earplugs. At the next stage of the study, the feasibility of using earplugs during musical performances was examined. Seven soloists and three music ensembles performed four pieces of music with custom-moulded musician's earplugs with acoustic filters designed to attenuate sound by 9, 15 or 25 dB. The results showed that the use of earplugs had a pronounced effect on the levels and spectra of the played sounds. The effect of wearing earplugs was largest for brass players: the change in 1/3-octave-band levels exceeded 15 dB at high frequencies when the musicians donned the earplugs, and the levels of sounds played without and with earplugs differed by about 5 dB. It was also found that the changes in level and spectrum of the sounds increased with the earplugs' sound attenuation. In the case of woodwind instruments the effect of wearing earplugs was smaller than that observed for brass instruments; the changes in 1/3-octave-band levels did not exceed 5 dB and the overall level differed by not more than 2 dB. All performances made with and without earplugs were recorded and judged for quality by six experts. The judgments demonstrated that the use of earplugs deteriorates the quality of the performance. The influence of wearing earplugs on performance quality can probably be reduced by training.
Audiology and SLP Program, University of Ottawa, Canada
ABSTRACT
Warning sound devices are commonly used in noisy workplaces to warn workers of potentially dangerous situations. Warning sound perception depends on many factors, including the warning sound level relative to the background noise, hearing protection and hearing status. Although national and international standards (i.e. ISO 7731) are available to guide the choice of warning sound devices, none appears to take all these factors into account within a comprehensive model. A software tool, Detectsound, was used to demonstrate the extent to which hearing protection can compromise the perception of warning sounds by workers with hearing loss. Detectsound yields desired target sound levels at different workstations for different workers and for various conditions of hearing protection use. Scenarios were constructed using a low-frequency noise spectrum from the NIOSH database, different degrees of sensorineural hearing loss, and personal hearing protector attenuation measurements or estimates based on the manufacturer's data. Detailed analysis of realistic scenarios with Detectsound revealed that a flat and high-frequency sensorineural hearing loss combined with hearing protection can compromise high-frequency perception and lead to overprotection. Such realistic scenarios make it explicit that the configuration of warning devices can vary significantly depending on the hearing status of the workers at a given workstation and on the variability in the attenuation provided by hearing protectors.
Department of Occupational and Environmental Medicine, Sweden
ABSTRACT
The aim was to study hearing loss in a population of aircraft technicians and mechanics and to identify predictors. Equivalent noise levels during a working day were measured and ranged from 70 to 91 dB(A); the maximum noise level was 119 dB(A). A total of 336 aircraft maintenance personnel answered a self-administered work environment questionnaire (response rate 76%) and underwent an audiometric test. The mean hearing test values at 3, 4 and 6 kHz for the ear with the greater hearing loss were compared with a Swedish population database of persons not occupationally exposed to noise. At younger ages (up to 40 y), aircraft technicians and mechanics had more hearing loss than the reference group. Through multiple logistic regression analyses, associations were found between age and hearing loss, and between exposure to solvents and annoyance due to hearing loss. In conclusion, aircraft technicians and mechanics may be exposed to equivalent noise levels above the Swedish occupational standard and have higher age-matched hearing threshold levels at younger ages compared to a reference group.
Applied Acoustics and Instruments Research Group (I2A2). Universidad Politécnica de Madrid. INSIA - Campus Sur UPM. Ctra. Valencia Km. 7. 28031 - Madrid, Spain
ABSTRACT
This paper aims to establish a relationship between exposure to military aircraft noise and the cognitive skills of the exposed subjects. The sample is composed of 65 subjects, divided into pilots and maintenance staff. Data were collected between 2006 and 2009. FAA and ISO regulations were considered for the study of noise, whereas survey techniques were applied for the psychoacoustic evaluation. Results show that low-frequency noise ranging between 16 and 250 Hz exceeds 100 dB in the working areas during all operational phases of the aircraft, with take-off being the phase with the highest pressure level; ground operations reach 125 dB. Meaningful changes regarding concentration and memory were not observed. As a conclusion, it is presumed that the brain transforms noise into electrical signals, modifying beta and delta brain waves, which generates irritability, fatigue, discomfort and sleepiness.
AGH UST University of Science and Technology, Cracow, Poland
ABSTRACT
Sound fields in industrial workrooms can be predicted well using numerical methods. Prediction models can be used to help estimate the benefits of control measures and to optimize them. Two main factors influence sound propagation in workrooms: the boundary conditions of the room and the fittings in the room. These factors should be accounted for in prediction models. Prediction models are employed to predict the sound fields in the measured configurations. To investigate sound propagation in real workrooms, experiments were performed. The noise of an omnidirectional sound source was measured at many points in the room simultaneously using a multi-channel signal acquisition system. This allowed the simulated results to be compared with those measured in real rooms.
Central Institute for Labour Protection, National Research Institute, Warsaw, Poland
ABSTRACT
For the assessment of noise emitted by machines, a global machine noise index has been developed. The global index is a function of the following partial indices: a sound power index, an index of the distance between the workstation and the machine, a radiation directivity index, an impulse and impact noise index, and a noise spectrum index. Each partial index always takes a positive value. If the value of the global index does not exceed 1, the noise of the assessed machine will not exceed the admissible A-weighted sound pressure level at the workstation.
Simulation tests of the partial indices and of the global index were carried out. The results show, among other things, that the value of the global index increases both with increasing sound power and with decreasing distance between the workstation and the machine. The correctness of the simulation results was confirmed by experimental tests. The experimental tests were carried out to determine the values of the global index for a group of engine-generators, using an inversion method for the determination of the sound power level. This required modelling each of the tested generators as a single omnidirectional substitute source. The correctness of the determined index values was confirmed by A-weighted sound pressure level measurements at hypothetically assumed workstations in simulated "in situ" conditions.
University of Auckland, Auckland, New Zealand
ABSTRACT
Noise-induced hearing loss is widespread and debilitating, yet at present there has been little research of relevance to New Zealand. We aimed to measure the noise levels associated with processes and equipment in the metal manufacturing sector, to assess daily and lifetime noise exposures for individuals, to measure hearing, and to determine the current and lifetime use of hearing protection equipment. Twenty-seven metal manufacturing companies took part. Noise levels associated with equipment and processes were measured and employees were interviewed during a shift. Hearing tests (otoscopy, tympanometry and pure-tone audiometry) were carried out, and dosimeters were fitted to 160 employees before the shift began on a second day. Many processes produced sound levels above the legally specified safe limits of 85 dB(A) Leq and 130 dB(C) Lpeak, and approximately half of the production workers were exposed to more than 85 dB(A) Leq over an eight-hour shift. Some of the noises, particularly impact noises, may be avoidable without loss of efficiency by altering processes. Hearing protection equipment was widely worn, and there was little evidence of widespread noise-induced hearing loss in younger workers. Older workers (>40 years) reported not having worn hearing protection during noise exposure earlier in their lives, and had hearing losses consistent with noise-induced loss, though the influence of age could not be ruled out. Disposable foam earplugs were often so poorly fitted that little or no protection was afforded by them, and poor fitting was correlated with the presence of hearing loss. Overall, the findings suggest that noise levels are too high for hearing safety, but that the wearing of hearing protection has been widespread over the last twenty years in this sector.
Acoustics Group, FESBE, London South Bank University, London, UK
ABSTRACT
With the introduction of the Control of Noise at Work Regulations in 2006, entertainment noise was given a temporary exemption until 2008. Unfortunately classical music was caught by the legislation, even though the sound is the point of the activity rather than a side effect, as is the case for industrial noise. Since 2007, the Royal Academy of Music, as a leading conservatoire, has been working together with London South Bank University on developing all practical means of complying with the new regulations. The 'noise project', assisted by the full cooperation of the Academy management, administrators, professors and students, can be split into four separate challenges: educating the musicians (both students and teachers); assessing the aural history of the musicians (students and teachers) and monitoring any changes in terms of hearing loss; assessing individuals' noise exposure and identifying the key instruments, ensembles and environments in the Academy that create the highest noise levels; and developing mitigating solutions (architectural, teaching, novel solutions). The emphasis of the project was to use and apply only culturally acceptable methods and solutions, in order to maintain the exceptional standards held by the Academy. This paper discusses the Royal Academy of Music 'noise project' and all the steps taken so far, both towards musicians' awareness of and protection from excessive noise exposure, and towards compliance with the new regulations.
(1) Central Institute for Labour Protection - National Research Institute, Warsaw, Poland (2) Institute of Radioelectronics, Warsaw Institute of Technology, Warsaw, Poland
ABSTRACT
Earmuff transmittance was measured on human subjects using the microphone-in-real-ear (MIRE) technique and on artificial test fixtures (ATFs). The purpose of the study was to determine hearing protector attenuation as a continuous function of frequency and to compare earmuff attenuation measured on ATFs with that measured on human subjects. Hearing protector attenuation is usually determined with the real-ear-at-threshold (REAT) method. According to the ISO 4869-2 standard, the measurements are made at seven octave-spaced frequencies from 125 Hz to 8 kHz, using one-third-octave bands of noise. Whereas such data are considered sufficient for conditions of exposure to broadband industrial noise, a detailed frequency response of the hearing protector may be useful for assessing protection in the presence of tonal components and may be used to predict the impulse noise time waveform under the hearing protector. In this study, measurements of earmuffs were conducted on five human subjects as well as on an ATF made according to ISO 4869-3:2007, a modified ATF with a 2 cm3 chamber, a KEMAR manikin, and a Brüel & Kjaer 4100 manikin. The measurements were made in free-field conditions, in an anechoic chamber, using a maximum length sequence (MLS) test signal. It was found that the earmuff frequency responses displayed substantial peaks and dips, exceeding 20 dB at certain frequencies. Such irregularities of the frequency response cannot be observed in standard octave-band measurements. The frequency responses determined on ATFs generally differed from those obtained on human subjects. This finding shows that the choice of ATF is crucial for obtaining an accurate representation of the conditions in which hearing protectors are worn by real users.
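An illustrative sketch only: estimating an attenuation curve as a continuous function of frequency from impulse responses measured at the ear (e.g. with an MLS signal) with and without the protector. The signal arrays are placeholders and the actual MIRE/ATF processing used in the study is not reproduced here.

```python
# Attenuation from two measured impulse responses, as a level difference of transfer functions.
import numpy as np

def insertion_loss(ir_open, ir_protected, fs, nfft=8192):
    """Level difference of the open-ear and protected-ear transfer functions, in dB."""
    H_open = np.fft.rfft(ir_open, nfft)
    H_prot = np.fft.rfft(ir_protected, nfft)
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    il = 20.0 * np.log10(np.abs(H_open) / np.abs(H_prot))
    return freqs, il

fs = 48000
ir_open = np.random.default_rng(1).normal(size=4096) * np.exp(-np.arange(4096) / 400.0)
ir_protected = 0.2 * ir_open          # placeholder: ~14 dB broadband attenuation
freqs, il = insertion_loss(ir_open, ir_protected, fs)
print(round(float(il[100]), 1))
```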
Communication Acoustics, Dresden University Of Technology, Germany
ABSTRACT
One of the interesting questions in vehicle acoustics is how we evaluate instationary sounds. If the sounds have two characteristic time sequences, what are the relative contributions of the different sequences to the overall quality judgment?
Normal operation of a vehicle covers stationary and instationary sounds, depending on the driving condition. "Engine start" (instationary) and "idle" (stationary) are two examples of these driving conditions, which are functionally coupled with each other. Therefore we hear the sounds of these conditions sequentially. Once such a hearing experience is over, we can form an overall evaluation of it. This study deals with the rules behind the evaluation process. We examine the hypothesis that the experience over time is not a simple combination of its discrete components. The influences of the extreme part (feature-based), the beginning part and the final part of the experience on the overall evaluation are investigated. In order to approach this aim in a systematic way, psychoacoustical experiments were conducted. In the first experiment, the binaurally recorded engine start and idle sounds of 12 cars from different brands were presented to non-expert subjects. They were asked to describe what they liked and disliked about the engine start and idle sounds. Some of the verbal descriptors used by the subjects were selected for the further experiments. In these experiments, the engine start and idle sounds were evaluated separately and in various combinations by the subjects using a quasi-continuous scale. The results of the experiments give some important hints about how experiences of multiple events are summed.
(1) Fiat Automóveis S.A., Brazil (2) Universidade Federal de Minas Gerais - UFMG, Brazil
ABSTRACT
In recent years sound quality has become a key issue for the automotive industry. As a consequence, better testing procedures associated with improved analysis tools have been in constant development to enable car manufacturers to adapt their products to their customers' needs. The present study represents an initiative under current development by the Federal University of Minas Gerais in co-operation with FIAT Automoveis do Brasil. The approach makes use of a combination of both the so-called engineering parameters and classical psycho-acoustic parameters. Some of the initial results, comprising a combination of objective and subjective assessments, are presented and organised in what is hoped to be a potential improvement and contribution to the acoustic evaluation of motor cars.
(1) Department of Electronic Measurement and Diagnostic Technology, Technische Universität Berlin, Germany (2) Development methods, Ingenieurgesellschaft Auto und Verkehr, Berlin, Germany
ABSTRACT
In Europe compression ignition (Diesel) engines are outselling spark ignition (gasoline) engines. To promote sales even further, there is strong interest in enhancing the sound quality of Diesel engines to match their performance in fuel economy and CO2 emissions. Among many other projects, the German Research Association for Combustion Engines (FVV) supports the project 'Noise controlled Diesel engine'. Within this project we investigate how the engine management can make use of noise-related sensor signals to reduce and control acoustic emissions whilst maintaining the overall goals on fuel efficiency and the regulations regarding (chemical) emissions. It is common understanding among experts that purely physical parameters such as the overall sound power level are insufficient to quantify the acoustic effect on the environment. Rather, psycho-acoustic parameters such as loudness or impulsiveness of the signal are required for that purpose. This is even more important if development effort is to be guided towards a quantifiable and noticeable benefit. To support this point and to demonstrate the approach, measurements on a Diesel engine were conducted on a test bench with variation of input parameters previously identified as having an influence on the engine noise. The tests were planned by DoE with the following inputs: engine speed, engine load, injection timings, and injection durations. From the measured airborne engine sounds, psychoacoustic parameters were calculated. From these, the Diesel note was derived, which was developed to quantify the acoustic impact on humans. The Diesel note was (indirectly) modeled in terms of the ECU parameters and optimized. This optimum shows the best possible sound that can be achieved within the valid ECU parameter combinations. The maximum potential can be shown by additionally computing the worst Diesel note. The approach will be further developed and validated in the future.
Centro de Investigación de Tecnología de Vehículos, Departamento de Ingeniería Mecánica y de Materiales, Universidad Politécnica de Valencia, Valencia, Spain
ABSTRACT
The influence of temperature and the associated gradient on the acoustic attenuation performance of automotive dissipative mufflers is studied in detail by a multidimensional analytical approach based on the mode matching method. To account for the variation of temperature within the absorbent material, a segmentation procedure is considered, with a number of dissipative regions having different but axially uniform temperatures. The technique is applied to dissipative reversing chamber mufflers including an absorbent material. For validation purposes, the analytical predictions are compared with numerical calculations based on the finite element method, showing good agreement. While the temperature does not modify the transmission loss of reactive mufflers if the ratio of the frequency to the speed of sound is considered as the abscissa, an influence is found for dissipative configurations, at least with the models of impedance and wavenumber currently available in the literature for absorbent materials. In addition, the effect of temperature gradients on the transmission loss of some selected configurations is studied.
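For readers unfamiliar with the segmentation idea, a minimal one-dimensional (plane-wave) sketch is given below; it is not the authors' multidimensional mode-matching formulation. The duct is split into segments, each with its own uniform temperature and hence its own speed of sound and density, the segment transfer matrices are chained, and the transmission loss is then evaluated. All dimensions and temperatures are hypothetical.

import numpy as np

def duct_segment_matrix(L, f, T_celsius):
    # plane-wave transfer matrix of a straight duct segment at a uniform temperature
    T = T_celsius + 273.15
    c = 331.3 * np.sqrt(T / 273.15)        # speed of sound, m/s
    rho = 101325.0 / (287.0 * T)           # air density from the ideal gas law
    k = 2 * np.pi * f / c
    Z = rho * c                            # characteristic impedance (per unit area)
    return np.array([[np.cos(k * L), 1j * Z * np.sin(k * L)],
                     [1j * np.sin(k * L) / Z, np.cos(k * L)]])

def transmission_loss(segments, f, Z_ref):
    # chain the segment matrices, then evaluate the plane-wave transmission loss
    T = np.eye(2, dtype=complex)
    for L, temp in segments:               # (length in m, temperature in deg C) per segment
        T = T @ duct_segment_matrix(L, f, temp)
    A, B, C, D = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    return 20 * np.log10(0.5 * abs(A + B / Z_ref + C * Z_ref + D))

# usage with a hypothetical axial temperature profile (gas cooling along the duct)
segments = [(0.10, 400.0), (0.10, 300.0), (0.10, 200.0)]
rho_c_ref = (101325.0 / (287.0 * 293.15)) * 331.3 * np.sqrt(293.15 / 273.15)
print(transmission_loss(segments, f=500.0, Z_ref=rho_c_ref))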
Federal University of Santa Catarina, Brazil
ABSTRACT
Beamforming is an acoustic imaging technique that can estimate the radiation pattern of single or complex sound sources and produce a map of the results. The pass-by noise test is a standardized test that aims to evaluate the overall noise of a vehicle at the sideline.
Coupling the idea of the pass-by test with the extension of the beamforming technique to moving sources gives access to the identification of the sound sources produced by the moving vehicle, for example rolling tyres, engines and exhaust systems.
The present paper aims to describe a low-cost system for applying the beamforming technique to the pass-by noise test. The system is based on low-cost electret microphones mounted in a metallic array and connected by coaxial cables to the acquisition system.
Results, in the form of beamforming maps from pass-by noise tests, are presented in more detail in the application section of the paper.
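As a sketch of the underlying principle only, a conventional frequency-domain delay-and-sum beamformer for a stationary source is shown below; the de-Dopplerized, time-varying delays needed for a true moving-source pass-by are not included. Microphone positions, scan grid and signals are hypothetical inputs.

import numpy as np

def delay_and_sum_map(signals, fs, mic_xyz, grid_xyz, f0, c=343.0):
    # signals: (n_mics, n_samples) microphone time histories
    # mic_xyz: (n_mics, 3) microphone positions, grid_xyz: (n_points, 3) scan grid
    n_fft = signals.shape[1]
    spectra = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    bin0 = np.argmin(np.abs(freqs - f0))
    p = spectra[:, bin0]                              # complex pressures at f0
    power = np.empty(len(grid_xyz))
    for i, g in enumerate(grid_xyz):
        r = np.linalg.norm(mic_xyz - g, axis=1)       # mic-to-grid-point distances
        w = np.exp(2j * np.pi * f0 * r / c) / len(r)  # steering: compensate propagation delay
        power[i] = np.abs(np.dot(w, p)) ** 2          # beamformer output power at this point
    return power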
State Key Laboratory of Automotive Safety and Energy, Tsinghua University, Beijing, China
ABSTRACT
Brake vibration and noise is a friction-induced phenomenon which, due to its importance and complexity, has been studied by many researchers in a variety of ways since the early 20th century. Recently, rapid progress has been made in both theoretical and experimental studies, especially on modelling and on analytical methodologies for solving practical problems. Besides a general review of the literature concerned, the present paper tries to emphasize the common points of the different investigations. The literature includes our own work at Tsinghua University since 1984, much of which has not been published in English, and its presentation therefore takes some pages. The analysis is made from the point of view of the suppression of brake vibration and noise.
Graduate School of Science and Engineering, Yamaguchi University, Japan
ABSTRACT
Mufflers are employed in the exhaust pipe to reduce the exhaust noise from automobiles. Perforated plates are often placed inside the muffler to reduce noise and enhance the strength of the muffler. However, the flow passing through the perforated plate may impinge on another inner plate and generate flow-induced noise. This study experimentally investigated the characteristics of the flow-induced noise generated by air flow passing through perforated plates and impinging on a flat plate. The perforated plate was installed at the end of a pipe through which steady air flow was supplied by a blower. The air flow passing through the perforated plate impinged on the flat plate. The noise level was measured with a sound level meter in an anechoic room. We examined the effects of the hole diameter, the dimensionless hole spacing, and the distance between the perforated plate and the flat plate, with constant total hole area and constant flow velocity at the holes. The results show that the flow-induced noise decreased with increasing distance between the perforated plate and the flat plate at frequencies higher than 100 Hz, but increased at frequencies below 100 Hz. The flow-induced noise above 100 Hz was less dependent on the hole diameter and the dimensionless hole spacing.
Acoustics and Vibration Group, School of Engineering and Information Technology, UNSW@ADFA, Canberra, Australia
ABSTRACT
Disc brake squeal, a major source of customer dissatisfaction, is known to be friction-induced due to the highly non-linear contact of the surfaces between the disc and the pads. Brake squeal remains fugitive and difficult to predict, also because some of its squeal frequencies have a varying character and cannot always be associated with component modes. By means of structural finite element analysis, a simplified brake system in the form of a pin-on-disc is first approximated by a block sliding on a plate. By varying pressure and the friction coefficient, no mode-coupling instability is observed and the mechanism extracted is purely of friction-induced nature. In particular, in-plane pad motion in the sliding direction and perpendicular to it seems to feed in most of the energy. These modes and their variability due to pressure variation, changes of the lining material's elastic constants and increased friction coefficient are then studied by means of the plate model. It is then shown that these pad modes also exist for a pad-on-disc model with isotropic lining material. A second pad-on-plate model with a more realistic lining material is developed which considers changes of the elastic constants due to pressure variations. It is found that changes in the elastic properties of the lining material significantly influence the vibrations of the pad modes. The kinetic energy spectrum rises with changing pressure and stiffness, and the combined effects of pressure synchronised with changing material properties are more severe than could be assumed from the complex eigenvalue method alone. By means of the inverse Fourier transform of the response spectrum and non-linear time series analysis it is possible to detect the instability of the pad-on-plate model. The results show that friction-induced instabilities resulting from non-binding forces between pad and disc, with energy transfer from pad to disc causing dynamic instability, might trigger mode coupling or amplify underlying unstable modes predicted by the complex eigenvalue method. It is shown that these instabilities are most likely responsible for squeal occurring at frequencies far from those of the rotor modes, as often observed in brake squeal.
School of Engineering and Information Technology, Acoustics and Vibration Group, UNSW@ADFA, Canberra, ACT
ABSTRACT
Since the early 1920s, disc brake squeal has been an issue for the automobile industry due to dissatisfied customers' complaints and the accompanying warranty costs. Despite a good deal of progress having been made in predicting brake squeal propensity, not all mechanisms are known and brake squeal remains unpredictable and highly fugitive. In recent years, research has been focused on brake squeal due to the mode-coupling type of instability, leaving out primary friction-induced mechanisms such as stick-slip. In this paper, the acoustic radiation of simplified brake systems, in the form of a pad rubbing on both a plate and a disc, is investigated. The radiation efficiency and acoustic power are calculated using the acoustic boundary element method, specifically ESI's Fast Multipole Solver (DFMM) implemented in VAOne. Results show that there exist some frequencies at which squeal occurs but which are not predicted by the complex eigenvalue method. These frequencies do not correspond to the frequencies of the rotor modes and are here referred to as 'instantaneous' pad modes causing a friction-induced instability. The frequencies of these instantaneous modes depend on the material properties of the pad and the contact conditions. The radiation efficiency changes less due to pressure variations than due to friction coefficient variations. Further, it is shown that pad modes are acoustically relevant and especially active at lower pressures.
Department of Automotive Engineering, Tallinn University of Technology (TUT), Estonia
ABSTRACT
Today, catalytic converters are common parts of automotive engine exhaust systems. The primary purpose of using catalytic converters is to reduce the gaseous exhaust pollutants. Besides reducing harmful pollutants, these devices have a significant effect on the acoustical performance and the pressure drop of the engine exhaust system. A catalytic converter is known to have two distinct acoustic effects: the reactive effect originating from the acoustic wave reflections caused by cross-sectional area changes within the unit, and the resistive effect which results in acoustic wave dissipation caused by viscous losses. The pressure drop in the narrow tubes of the catalytic converter element results in frequency-dependent resistive effects on the transmitted sound. In this paper the passive acoustic effect, i.e. the sound attenuation in catalytic converters, is investigated. An experimental investigation of automotive catalytic converters with a straight-flow design, treated as acoustic two-ports, is carried out. A novel test facility, recently set up at TUT for the acoustic characterization of exhaust system elements under hot flow conditions, is used to determine the sound transmission of the catalytic converters. The experimentally determined frequency-dependent transmission loss, together with the pressure drop data across the element, is compared with simulated results for a number of varied mean flow conditions obtained using 1-D simulation software.
(1) Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden (2) Belgian Road Research Centre (BRRC), Brussels, Belgium (3) Technical University of Gdansk (TUG), Gdansk, Poland
ABSTRACT
It has been suggested recently that vehicles driven in electric mode, either hybrid or pure electric vehicles, are so quiet that they constitute a safety hazard for pedestrians and bicyclists in traffic. It is claimed that such vehicles are not acoustically perceived because the power unit has been exchanged from a combustion engine to electric motors, something that essentially cuts away all power unit noise and leaves tyre/road noise, the latter of which is the same as for similar-sized vehicles with combustion engines. There are currently a number of fast and concerted actions by the US and Japanese governments as well as within international bodies such as UN/ECE and ISO, with the expected outcome that "minimum noise" of vehicles shall be measured with a standard method and legal limit values for such "minimum noise" shall be established. The paper presents findings regarding possible traffic safety effects of quiet vehicles and concludes that only a US study has identified such effects. A critical review leads to the conclusion that this study may be biased and needs confirmation by further research. After reviewing data from noise measurements in Japan, the authors present their own previously unpublished data on noise emission levels for road vehicles which may be considered "quiet". Special concern is given to noise at speeds below 20 km/h, where it is expected that the problem might be the worst and where previous data are missing. It is concluded that a significant number of our present internal combustion engine vehicles are already so quiet at low speeds that normally one cannot hear any difference between an electric and a normal vehicle in an urban area. Tyre/road noise is the dominating noise in most cases where a light vehicle is driven at speeds at or above 15-20 km/h (heavy accelerations are the exceptions), and this is the same whether the vehicle is electric or not. Thus, it is a property of our vehicle fleet which we have had for more than a decade, and few have considered it a safety problem. Therefore, there is not enough justification for equipping our future quiet vehicles with extra artificial noise or warning sounds. If needed at all, there are better options which are non-acoustical.
(1) Automotive Parts Innovation Center, Ulsan, Korea (2) Pusan National University, Pusan, Korea
ABSTRACT
Among the various elements that affect customers' evaluation of an automobile's quality, buzz, squeak, and rattle (BSR) noise is considered to be a major contributing factor. In this paper, advanced experimental methods are proposed which can ultimately be used to reduce squeak and rattle noise in the vehicle cabin, especially in the early design stage of development. First, to reproduce the excitation signal from roads to the instrument panel without distortion, a vibration-triggering system consisting of a fixture and an electro-magnetic shaker is newly developed. Then, potential source regions for BSR are localized using the near-field acoustic holography technique. Finally, subjective evaluation of BSR from the detected potential noise source regions is made by utilizing various sound quality indexes. Based on these results, it is illustrated that the proposed experimental methods can be used as a test procedure to systematically tackle BSR issues in the early stages of the vehicle development cycle, which results in a reduction of production cost.
Acoustics and Dynamics Laboratory, Smart Vehicle Concepts Center, The Ohio State University, Columbus, Ohio, USA
ABSTRACT
Direct measurement of forces (with conventional force transducers) is not practical in many real-life applications since the interfacial conditions may change. Thus indirect force estimation methods must be developed, say by using other measured signals such as operating motions or pressures; however, the applicable system properties must be known a priori. Indirect force estimation also poses special difficulty for hydraulic engine mounts, which exhibit spectrally-varying and amplitude-sensitive parameters. This paper proposes new or refined procedures that overcome some of these obstacles and thus provide a better estimate of the interfacial forces in a nonlinear hydraulic engine mount. First, experimental time domain data from the non-resonant dynamic stiffness test are investigated for fixed and free decoupler designs. The fundamental and super-harmonic terms in the measured force and upper chamber pressure data are compared in the frequency domain up to 50 Hz. Second, mechanical and fluid models for fixed and free decoupler mounts are employed to relate motion and upper chamber pressure to the transmitted force, using a dual transfer path approach based on a linear time-invariant system assumption. Third, the spectrally-varying and amplitude-sensitive parameters are determined by using the transfer functions from the fluid models and steady state measurements. Fourth, the dynamic force is estimated by alternate methods that employ measured excitation motion and/or upper chamber pressure signals. To include the nonlinear effect, the effective parameters for the quasi-linear model are defined only at the fundamental harmonic. Finally, Fourier series expansions, with embedded transfer functions in terms of force to pressure (and force to motion), are utilized to calculate the precise forces transmitted to a rigid base. The proposed procedure shows that a quasi-linear model successfully predicts the dynamic forces transmitted by the nonlinear hydraulic mount in both the time and frequency domains. Ongoing and future work will be briefly mentioned in the paper.
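The dual-transfer-path idea under the linear time-invariant assumption can be sketched as follows. H_fp and H_fx stand for pre-identified force/pressure and force/motion transfer functions (hypothetical names, assumed to be sampled on the rfft frequency grid of the measurement); the amplitude-sensitive, quasi-linear refinement described above is not included.

import numpy as np

def estimate_force(p_meas, x_meas, H_fp, H_fx):
    # p_meas, x_meas: time histories of upper-chamber pressure and excitation motion
    # H_fp, H_fx: transfer functions from pressure and motion to transmitted force
    P = np.fft.rfft(p_meas)
    X = np.fft.rfft(x_meas)
    F = H_fp * P + H_fx * X          # dual-path, linear time-invariant assumption
    return np.fft.irfft(F, n=len(p_meas))   # estimated transmitted force vs time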
AECOM, Melbourne, Australia
ABSTRACT
This paper describes recent research into noise emissions from trail bikes. A review of relevant noise control regulations is provided, covering the operation of trail bikes across Australia and including regulations specific to the State of Victoria. Initial investigations comprised stationary noise testing of a selection of typical bike and exhaust configurations, conducted under controlled conditions and according to the standardised test procedures applicable to each regulatory framework. Results from these initial measurements indicated substantial variance between the noise levels obtained according to the different test procedures, even where the relevant noise limit is identical. The results also clearly demonstrated the influence of engine speed during testing, not only for obtaining repeatable results, but also for meaningful comparison of the noise levels obtained under the different regulatory procedures. Following the stationary noise testing, a selection of bikes and exhausts were subjected to noise measurements during ride-bys conducted on a forest road, typical of riding conditions in Victorian State Forests. Results from the ride-by measurements revealed substantial increases in noise between the stationary test results and the ride-by levels. The influence of after-market exhausts was also studied, revealing significant increases in the overall noise level and in the tonal characteristics of the noise emitted. This research was commissioned by the Victorian Department of Sustainability and Environment, and completed by AECOM with assistance from the Environment Protection Authority Victoria.
Department of Mechanical Engineering, Naval University of Engineering, Wuhan, P.R.China
ABSTRACT
When the general boundary element method (BEM) is applied to the Helmholtz integral equation (HIE), integration singularities and hyper-singularities occur. A self-adaptive Gauss quadrature algorithm is proposed to overcome the singularity. In this technique, the initial singular boundary element (the father element) is divided into temporarily refined small elements (children elements), and the integral over the initial element is transformed into Gauss quadrature over the children elements. The children elements can be further divided into smaller elements until the integral solution converges within an allowable tolerance, without increasing the number of boundary elements, as the refined children elements are cleared as soon as the singular integration is finished. Taking advantage of this technique, the radiation surface can be coarsely meshed so as to reduce the number of elements and the computational effort. The convergence behaviour and the range of application of this adaptive scheme are then investigated, and it is shown that the adaptive scheme can only be applied to singular or weakly singular integration. A numerical case concerning the sound radiation of a uniformly pulsating sphere was investigated to validate the adaptive algorithm, and the numerical solutions agree well with the analytical solutions, with a relative error of less than 1.5 dB. BEM coupled with FEM was then applied to predict submarine vibration noise, taking fluid-structure interaction effects into account. By visualizing the near-field sound pressure distribution, the high sound pressure areas were localized. Finally, the underwater radiated sound power was calculated and the peak frequencies were identified. Reducing the stiffness of the engine's periodic isolator effectively transfers the sound power at the peak frequencies into the broadband spectrum, so that the line-spectrum vibration noise is controlled.
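A one-dimensional illustration of the father/children subdivision idea (not the authors' surface-element implementation) is sketched below: the parent interval is split until the Gauss quadrature estimate converges, which copes with weakly singular integrands of the kind met in BEM kernels.

import numpy as np

GAUSS_X, GAUSS_W = np.polynomial.legendre.leggauss(4)

def gauss(f, a, b):
    # 4-point Gauss-Legendre quadrature on [a, b]
    x = 0.5 * (b - a) * GAUSS_X + 0.5 * (a + b)
    return 0.5 * (b - a) * np.dot(GAUSS_W, f(x))

def adaptive_gauss(f, a, b, tol=1e-6, depth=0, max_depth=30):
    # split the father interval into children until the estimate converges
    whole = gauss(f, a, b)
    mid = 0.5 * (a + b)
    split = gauss(f, a, mid) + gauss(f, mid, b)
    if abs(split - whole) < tol or depth >= max_depth:
        return split
    return (adaptive_gauss(f, a, mid, tol / 2, depth + 1, max_depth) +
            adaptive_gauss(f, mid, b, tol / 2, depth + 1, max_depth))

# example: a weakly singular integrand similar to those met in BEM kernels;
# the exact value of the integral of ln(x) over (0, 1] is -1
print(adaptive_gauss(lambda x: np.log(x), 1e-9, 1.0))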
Mechanical Engineering Department, Koc University, Istanbul, Turkey
ABSTRACT
The interior noise inside the passenger cabin of automobiles can be classified as structure-borne or airborne. In this study, we investigate the structure-borne noise, which is mainly caused by the vibrating panels enclosing the vehicle. Excitation coming from the engine causes the panels to vibrate at their resonance frequencies. These vibrating panels cause a change in the sound pressure level within the passenger cabin and consequently generate an undesirable booming noise. It is critical to understand the dynamics of the vehicle and, more importantly, how it interacts with the air inside the cabin. Two methodologies were coupled to predict the sound pressure level inside the passenger cabin of a commercial vehicle. The Finite Element Method (FEM) was used for the structural analysis of the vehicle, and the Boundary Element Method (BEM) was integrated with the results obtained from FEM for the acoustic analysis of the cabin. The adopted FEM-BEM approach can be utilized to predict the sound pressure level inside the passenger cabin, and also to determine the contribution of each radiating panel to the interior noise level. The design parameters of the most influential radiating panels (i.e., thickness) can then be investigated to reduce the interior noise based on three performance metrics. The performance metrics selected for this study are "Percentage over 80 dBA", "Max Amplitude", and "Idealized Performance Error".
A design of experiments (DOE) technique was employed to understand the relationship between the design parameters and the performance metrics. The components that have the highest contribution to the sound pressure levels inside the cabin are identified. For each run, the vibro-acoustic analysis of the system is performed, the sound pressure levels are calculated as a function of engine speed, and then the performance metrics are calculated. The highest contributors (design parameters) to each performance metric are identified and regression models are built. These regression models can be used in future optimization studies to find the optimum configuration of the panel thicknesses and improve the sound pressure level inside the cabin.
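As a generic illustration of this last step only (not the authors' model or data), a least-squares regression of one performance metric on panel thicknesses from a small DOE matrix might look like the sketch below; every number is hypothetical.

import numpy as np

# hypothetical DOE matrix: each row is one run, columns are panel thicknesses in mm
X = np.array([[0.8, 1.0, 0.7],
              [1.0, 1.0, 0.9],
              [0.8, 1.2, 0.9],
              [1.0, 1.2, 0.7]])
# hypothetical responses, e.g. "Percentage over 80 dBA" for each run
y = np.array([12.0, 9.5, 10.2, 8.1])

# least-squares regression with an intercept term
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and thickness sensitivities:", coef)

# the fitted model can then be evaluated for candidate thickness combinations
candidate = np.array([1.0, 0.9, 1.1, 0.8])   # [1, t1, t2, t3]
print("predicted metric:", candidate @ coef)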
(1) The University of Queensland, QLD, Australia (2) Australian Rail Track Corporation, QLD, Australia (3) Rail Corp., QLD, Australia
ABSTRACT
The high pitched noise phenomenon known as wheel squeal has plagued the railway industry for many years and is becoming more critical as railway usage increases and subjective human noise tolerance decreases. One of the perplexities of the occurrence of wheel squeal is that it appears to be sensitive to a wide range of parameters that naturally vary in the field. This research investigates the effect of changes in the coefficient of friction due to humidity on the likelihood of wheel squeal events occurring. Theoretical mechanics-based modelling is developed and compared to a database of field measurements of wheel squeal occurrences at a particular site in South Australia. In particular, a relatively simple model of wheel squeal is developed based on existing literature [1-3], but notably it incorporates probabilistic mechanics to account for field parameter variations and hence allows direct comparisons with field data. The underlying mechanism of wheel squeal modelled is based on a flexible lateral mode of vibration of the wheel being self-excited by unstable lateral stick-slip contact mechanics (providing negative damping, i.e. energy input). The model is then tuned to conditions at a field site at which over 2 million wheel passes have been monitored over a period of 3 years. The comparison indicates that the field-measured trends for the effect of relative humidity on the coefficient of friction, and hence on the occurrence of wheel squeal, can be predicted using this very efficient model.
Further research is underway to provide further insight and extend the predictive modelling. This includes: field measurements of the angle of attack and of the lateral and vertical forces of passing trains; the development of a method for estimating the relative humidity of the rail based on measurements of air temperature, rail temperature and relative humidity of the air; and a full investigation of the interrelationships between humidity, the creep curve, the equilibrium cornering behaviour and the probabilistic occurrence of wheel squeal.
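To make the probabilistic idea concrete, a toy Monte Carlo sketch is given below. The humidity dependence of the falling friction slope, the damping distribution and the instability criterion are all illustrative assumptions, not the calibrated model described above.

import numpy as np

rng = np.random.default_rng(0)

def squeal_probability(rel_humidity, n_trials=10000):
    # assumed model: squeal occurs when the negative slope of the friction-creep
    # curve injects more energy than the wheel mode can dissipate
    slope_mean = -0.8 + 0.006 * rel_humidity     # hypothetical: less negative when humid
    slope = rng.normal(slope_mean, 0.15, n_trials)
    # hypothetical scatter in modal damping (and, implicitly, angle of attack)
    damping = rng.lognormal(mean=np.log(0.3), sigma=0.3, size=n_trials)
    squeal = -slope > damping                    # energy input exceeds dissipation
    return squeal.mean()

for rh in (30, 50, 70, 90):
    print(rh, "% RH ->", squeal_probability(rh))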
(1) Heggies Pty. Ltd., Sydney, Australia (2) Grocon Constructors Pty. Ltd., Sydney, Australia
ABSTRACT
Construction of a 16 level mixed use commercial and retail building known as theskyvue was recently completed at 413 George Street / 68 York Street in the heart of Sydney's CBD. The development straddles four active rail tunnels located only 2 m to 3 m below the site, making the proximity to the underground railway critical in terms of potential noise and vibration impacts. Extensive testing and modelling carried out very early in the design phase predicted that ground-borne noise levels could potentially exceed the project goals by up to 7 dBA on the lowest retail floor, and by 5 dBA on the most affected office floor. Mitigation of train induced vibration and associated ground-borne noise was achieved by designing and installing a complete vertical and lateral building base isolation system in the form of specially engineered laminated elastomeric bearings. Other constraints driving the design of the base isolation system related to the building's ability to withstand lateral loading from wind, out of balance earth pressures and potential earthquakes. This paper presents the detailed engineering methodology implemented to develop the specifications and to predict the performance of the theskyvue building base isolation system. A description of the vertical and lateral isolation system is provided, together with some details on its installation. Results from compliance measurements (i) confirm that the theskyvue development meets the ground-borne noise and vibration design goals and (ii) validate the prediction methodology, with the overall ground-borne noise levels falling within 1 dBA of the predictions.
(1) Wilkinson Murray P/L (2) Faculty of Engineering, University of Wollongong, NSW Australia
ABSTRACT
Trackside systems for automatic monitoring of noise from train passbys are becoming more common. Typically these record an audio file for each passby and download this file for spectral and other analysis. Automatic detection of the presence and level of wheel squeal from these files provides significant additional information for both operators and environmental authorities. Recently in NSW, two groups have independently developed algorithms for detecting and quantifying wheel squeal. Both are based on a Short Term Fourier Transform analysis, but the details of the procedures differ. Outputs include the maximum level, SEL, duration and spectrum of squeal, and in one case also of flanging noise. This paper compares the procedures and outputs of the two algorithms using a set of recorded audio files from train passbys. The outputs are compared with human judgements of the presence and significance of squeal, to derive guidelines for interpretation. Results indicate the potential of spectrogram-based detection techniques in this and similar applications, and also point to some practical issues associated with their implementation.
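Neither of the two NSW algorithms is reproduced here, but the general spectrogram-based approach they share can be sketched as a single-threshold tonality test on STFT frames; the band limits and prominence threshold below are illustrative assumptions.

import numpy as np
from scipy.signal import stft

def detect_squeal(x, fs, band=(1500.0, 10000.0), prominence_db=15.0):
    # flag frames of a passby recording that contain narrow-band squeal:
    # a frame is flagged when its strongest spectral line inside `band`
    # exceeds the frame's median in-band level by `prominence_db`
    f, t, Z = stft(x, fs, nperseg=4096, noverlap=2048)
    S = 20 * np.log10(np.abs(Z) + 1e-12)
    sel = (f >= band[0]) & (f <= band[1])
    peak = S[sel].max(axis=0)
    floor = np.median(S[sel], axis=0)
    flags = (peak - floor) > prominence_db
    squeal_freqs = f[sel][S[sel].argmax(axis=0)]
    return t, flags, np.where(flags, squeal_freqs, np.nan)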
AECOM, Sydney, NSW, Australia
ABSTRACT
The force transmissibility of a track isolation system is the ratio of the force transmitted to the track bed over the force applied at the railheads. This paper focuses on the force transmissibility of a 'double-tie' or 'mini-slab' floating slab track installation. The power spectral densities of the forces transferred to the ground due to 'white noise' force sources at the rail heads are calculated. Analytical models and the Finite Element (FE) method are used to analyse a floating slab track supported on a rigid foundation. The effect of rail fastener stiffness is discussed. The influence of the flexibility of the floating slabs and the effects of a non-rigid foundation on the force transmissibility are investigated within the framework of the FE method.
Acoustics & Ultrasonic Standard, National Physical Laboratory, CSIR, New Delhi, India
ABSTRACT
The paper describes measurements of train and traffic induced ground vibrations. Vibration impacts relate to annoyance and the potential for structural damage. Buildings in the vicinity of mass transit projects respond to these vibrations with varying results, ranging from perceptible effects and low rumbling sounds to slight cosmetic damage to the structure. It is therefore imperative to ascertain the amplitude of the vibrations generated by trains and traffic, in order to characterize the ground-borne noise radiated by them and to determine possible mitigation measures to be adopted either at the path or at the receiver end if the source cannot be treated. The present work discusses measurements of the vibration amplitude levels generated by at-grade traffic and trains and by underground metro trains, and correlates them with various damage criteria for building elements.
Siauliai University, Vilnius, Lithuania
ABSTRACT
Railway development projects for the modernization of the EU Baltic railways have now been prepared. The modernization projects foresee the implementation of noise reduction measures for the newly constructed main railway lines. For that purpose, sound reduction measures are to be designed for the new railway, and the old railway is to be modernized, in order to protect the population from railway noise. Our work presents various noise reduction measures which, taking into account the relief of the country and the location of townships, should be designed into the modernization of the railway lines. Besides the already known noise reduction measures, such as noise screens, embankments and various constructions, types of noise screens intended for low-frequency noise reduction, which were studied by the authors, are presented. Attention is also devoted to the impact of infrasound on people living near the railway, and the propagation of vibrations through the ground to residential buildings is researched.
With this aim in view, field trials were performed to clarify the process of vibration propagation from the railway lines (rails) to residential buildings. It was established that low-frequency vibrations propagate quite far (up to 200 m and more, depending on the ground composition) and act on the foundations of buildings, which in turn generate wall vibrations in the building. These wall vibrations excite inaudible sounds (infrasound) inside the premises, which have an unpleasant impact on the occupants. On the basis of these data, measures were foreseen that reduce the amplitudes of the vibrations propagating in the ground and thus the impact of the unpleasant sounds on the residents. The results obtained will be used in recommendations for the design of noise reduction measures when expanding the railway network in the Baltic countries.
(1) Siauliai University, Siauliai, Lithuania (2) Klaipėda University, Klaipėda, Lithuania
ABSTRACT
The report deals with the opportunities for the implementation of the EU Rail Baltica strategic study project, evaluating the fulfillment of acoustical requirements for the railway route. The concept of Rail Baltica refers to the imaginative, strategic and sustainable north-south rail project connecting Tallinn in Estonia - via Latvia and Lithuania - with Warsaw in Poland.
At the same time, the Lithuanian Government is obliged to evaluate the possible impact of the noise caused by the trains on the environment. This report is aimed at investigating the noise characteristics of railway transport facilities in their present-day and future use, analyzing the harm their parameters cause to individuals and the environment, and clarifying what measures should be taken to reduce the impact of the possible noise on the individual and the environment.
The report provides sound pressure levels from passing trains which are running now and will continue to run until the reconstruction of the main line; noise levels are also predicted for modernized road and railway transport means (locomotives, vans, etc.). A study is made of the spectral composition of the noise produced by the trains, in terms of the quantity and quality of the radiated energy with respect to the human organism, i.e. not only the A-weighted sound level but also the spectral composition of the sound pressure level. Attention is focused on the impact of very low sound frequencies and infrasound on the environment. On the basis of the measured and calculated noise levels propagated by various future railway trains, the impact of the noise produced by the trains on the environment is specified by the method of analysis. Based on the noise analysis results, measures are foreseen for the reduction of the harmful noise propagated by the trains.
Heggies Pty Ltd, Australia
ABSTRACT
Rail damping is an emerging technology for mitigating airborne railway noise at the source. Rail dampers may be described as pre-formed or adjustable elements that are attached to the sides of the rails. These pre-formed elements perform the task of improving a rail's ability to decay noise‑inducing vibrations resulting from the rolling contact between the wheel and rail. The implementation of source controls such as rail dampers can potentially avoid or reduce the need to consider further mitigation options such as noise barriers and building treatments. A field trial was undertaken in cooperation with a rail damper manufacturer in order to quantify the noise reduction on a section of standard ballast track on the NSW metropolitan rail network. The results of the field trial have highlighted the complexities of selecting and tuning a rail damper for a particular track-form and minimising airborne noise emissions at the wayside. This paper presents a description of the rail dampers (as tested), the methodologies used to evaluate the rail damper performance, outcomes of the field trial and the challenges associated with undertaking such a trial within an operational rail corridor.
Sound Barrier Solutions Ltd., London, UK
ABSTRACT
The operation of a rail freight terminal can involve many processes associated with the loading and unloading of containers that generate noise of an intermittent or impulsive nature. In particular, the use of reach stackers can make it difficult to justify night-time operation when assessing the perceived LAmax levels against the current WHO criterion. This paper examines the modelling of the real-time performance of a noise barrier scheme around an urban rail freight terminal in the UK Midlands. It considers the typical noise signature of a train arriving, unloading and departing. It also examines the processes involved in aggregate handling and the use of reach stackers and swing-through cranes for container transportation. Using the model, the worst-case combination of transient noise sources was determined. The barrier design was then optimised and specified to meet the World Health Organisation (WHO) Guidelines for Community Noise and BS 4142, the method for rating industrial noise affecting mixed residential and industrial areas.
Heggies Pty. Ltd., Lane Cove, NSW, Australia
ABSTRACT
The Epping to Chatswood Rail Link (ECRL) commenced operating in 2009. At the Chatswood end of the project, the railway corridor contains two existing tracks (with conventional ballast and sleeper design) and two new concrete slab tracks, the design of which incorporates sections of low stiffness rail fasteners and floating slab track. During commissioning, noise measurements were undertaken at the upper floors of two high rise buildings which overlook the railway corridor. Noise from the new ECRL tracks was found to include prominent tones which added to the subjective loudness of train passbys, and levels were approximately 5 dBA higher than for the existing tracks. The noise controls incorporated into the final mitigation package included additional rail grinding and the installation of rail dampers and acoustic panels. Computer noise modelling and a simplified cost-benefit analysis approach were adopted to optimise the mitigation measures. This paper discusses the noise benefits associated with each mitigation measure and provides a comparison between the predicted and measured noise level reductions.
(1) Centro de Investigaciones Acústicas y Luminotécnicas (CIAL), Universidad Nacional de Córdoba, Argentina (2) Grupo de Investigación en Instrumentación Acústica Aplicada (I2A2), Universidad Politécnica de Madrid, Spain
ABSTRACT
This work presents the objective results of an assessment of the acoustic quality of the Society of Jesus Church in Cordoba, Argentina. The acoustics of this temple, built by the Jesuit Order three centuries ago and declared a World Heritage Site by UNESCO in 2000, is currently considered optimal by musicians as well as by the general public. In the second half of the 16th century, with the Catholic reform, the need for improved speech intelligibility was given priority, the Jesuits being one of the orders that gave most importance to the construction of their temples. This church has constructive and spatial characteristics consistent with those needs. In order to carry out the acoustic assessment of the precinct, a methodology was developed that allowed the objective results, obtained through field measurements and modelling of the space, to be compared with subjective appreciation results gathered through surveys, with the aim of characterizing the sound space acoustically. This paper shows the comparison between the subjective results and the objective criteria, which allowed important conclusions on the acoustic behaviour of the temple to be drawn. In this way, interesting data were obtained on the subjective response to the acoustics of the church.
Architecture, Design and Planning, Sydney University, NSW, Australia
ABSTRACT
An ambisonic microphone was used to measure the degree to which a sound field varied with direction within a reverberant room. The apparent diffusivity of the room was varied by incrementally adding reflecting panels, according to AS ISO 354 2008, producing seven different room states. In each state the reverberation time was measured using three loudspeaker positions and four measurement microphone positions, according to the interrupted noise method outlined in AS ISO 354 2008. Recordings were made of sinusoidal sweeps for the three loudspeaker positions with a first order ambisonic microphone at three different positions in the room. The recorded sine sweeps were converted to impulse responses to measure the evenness of the sound field around the microphone in each room state. These results are compared with the traditional method of establishing a diffuse state in a reverberation room, with a view to the development of a more direct method for establishing an isotropic state in reverberant rooms.
Institute for Research in Construction, National Research Council, Ottawa, Canada
ABSTRACT
This paper describes a new system of speech privacy criteria in terms of Speech Privacy Class (SPC) values. These can be used to specify the required speech privacy for new construction or to assess the speech privacy of existing rooms. The ASTM E2638 measurement standard defines SPC as the sum of the measured average noise level at the position of a potential eavesdropper outside the room, and the measured level difference between a source room average and the transmitted levels at the same potential eavesdropper location. For a given combination of level difference and ambient noise level, the likelihood of transmitted speech being audible or intelligible can be related to the probability of higher speech levels occurring in the meeting room, based on the statistics of speech levels from a large number of meetings. For a particular meeting room speech level, there is an SPC value for which transmitted speech would be at the threshold of intelligibility or even at the threshold of audibility. One can create a set of increasing SPC values corresponding to increasing speech privacy and for each SPC value one can give the probability of transmitted speech being either audible or intelligible. This makes it possible to accurately specify speech privacy criteria for meeting rooms and offices, varying from conditions of quite minimal to extremely high speech privacy, with an associated risk of a speech privacy lapse which is acceptable for each situation.
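The SPC arithmetic itself is simple, following the ASTM E2638 definition quoted above; a minimal sketch with illustrative numbers is:

def speech_privacy_class(level_difference_db, background_noise_db):
    # SPC per ASTM E2638: source-room to eavesdropper level difference
    # plus the average background noise level at the eavesdropper position
    return level_difference_db + background_noise_db

# example: 45 dB level difference and 35 dB ambient noise outside the room
print(speech_privacy_class(45.0, 35.0))   # SPC = 80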
Department of Building Services Engineering, The Hong Kong Polytechnic University, P.R.China
ABSTRACT
Numerous concert halls and auditoria in Hong Kong have been built and used for decades. Most of the halls in this congested city are designed for multi-purpose use and built with balconies to maximize the use of space. While objective and subjective evaluations of the acoustical properties of performance halls have been done around the world, it is time for Hong Kong to have her own systematic research. In the present study, measurements have been carried out in two fan-shaped multipurpose performance halls. The conditions with and without the acoustic enclosure were also studied. A dual-channel dummy head was used as the receiver, while an omni-directional sound source with the room acoustics software DIRAC was used for MLS generation and processing. Measurement points were located throughout the halls.
(1) Department of Architecture National Cheng-Kung University, Tainan, Taiwan (2) Architecture and Building Research Institute, Ministry of the Interior, Taipei, Taiwan
ABSTRACT
The serious energy and natural resource shortage that our living environment is currently facing creates a strong demand for a better building material certification and management mechanism. Following a twelve-year green building material evaluation and labelling research program which started around 1998, the Architecture and Building Research Institute (ABRI) of Taiwan proposed the Green Building Material (GBM) Labelling system in 2003, and it was officially launched in 2004. The GBM system aims to promote a sustainable built environment for the Earth and a healthier living quality for human beings. It was established on the basis of the ISO 15686, ISO 21930 and ISO 14040 series, as well as the Integrated Building Performance (IBP) system proposed by the EU, to ensure that the evaluation criteria and standards meet the current development trends of the world. The Taiwan GBM evaluation system incorporates low toxicity, minimal emissions, low VOC during assembly, recycled content, resource efficiency, recyclable and reusable materials, energy efficiency, water conservation, IAQ improvement, and the use of local products, among others (Froeschle, 1999). The criteria are systematically organised into four categories: health, ecology, high performance and recycling. The assessment mainly adopts the life cycle assessment approach, covering four stages of the life cycle of a building: resource exploitation, production, usage, and disposal and recycling. This paper presents the sound insulation assessment within the high-performance GBM category, which incorporates sound insulating materials, energy-saving glass and permeable materials. By the end of April 2010, 323 labels had been conferred, covering 3000 green products. Among these products, the high-performance category occupies 14.53% (sound insulating materials occupy 2%) and the health category occupies 76.93%, followed by recycling with 8.11% and ecology with 0.43%. The percentage distribution indicates that sound insulating materials are needed, but that the health issue has been most highly emphasized, and it points out the development trend of the building material market in Taiwan. In addition, the regulation mandating at least 30% green building material utilization has also been incorporated into Taiwan's Building Code and has been effective since July 2009.
Graduate Program in Architectural Acoustics, School of Architecture, Rensselaer Polytechnic Institute, USA
ABSTRACT
Most room acoustic parameters are calculated with data from omnidirectional or figure-of-eight microphones. Using an ambisonic microphone to record room impulse responses can open up several new areas of inquiry. It can yield much more information about the spatial characteristics of the sound field at the points of interest, including the diffuseness of the sound field and the directions of individual reflections. This method can also be used to produce high quality auralizations through ambisonic reproduction techniques. In this research, room impulse responses are measured in reverberant rooms used for music from stage and audience positions using a 16-channel, second-order ambisonic microphone and a dummy head. The results are analyzed using beamforming techniques and compared to those produced by acoustics models. Ambisonic and binaural auralizations are created and used in subjective listening tests to determine if and how ambisonic reproduction affects listeners' judgments of different parameters and overall subjective impressions of different concert venues, with particular emphasis on apparent source width, listener envelopment, and spaciousness.
Dept. of Urban Engineering, FESBE, London South Bank University, UK
ABSTRACT
Professional orchestras regularly use the same venues, whether for rehearsals or performances, and they also use different spaces depending on the piece, usually either on stage or in an orchestra pit. Stage acoustics have been addressed many times; however, the acoustics of rehearsal spaces and pits have been largely ignored. The London Philharmonic Orchestra regularly performs or rehearses at the Royal Festival Hall, Glyndebourne, Brighton Dome and Henry Wood Hall. This paper presents room acoustics measurements, proposals and implemented solutions using noise control techniques in the four venues. The primary aim was to improve the environment for the musicians. The effectiveness of the improvements is presented.
Dept. Socio-Cultural Environmental Stud., Grad Sch. of Frontier Sciences, The University of Tokyo, Japan
ABSTRACT
In the acoustic design of small enclosures, controlling the eigenmodes at low frequencies is a considerably important matter, so many studies have been carried out on the effect of the overall shape of a room on the eigenmodes, such as optimization of the room dimension ratios. However, overall room shapes are usually restricted to rectangular forms for ease of construction; it is therefore desirable to improve sound fields only by changing partial elements within rectangular rooms. In the present paper, the effects of additional elements, such as columns, beams and furniture, on the sound field in a small rectangular room are investigated through wave-based numerical analysis. Assuming a room used for listening with a loudspeaker, the effects are evaluated with regard to the flatness of the frequency response and the uniformity of the spatial distribution in a listening area. The results show that the effect of columns is small but larger than that of beams; the effect of closed-type shelves is relatively large, but not so large for open-type shelves; and the size and arrangement of every element have non-negligible effects. It is also seen that the additional elements generally lead to positive effects on the flatness of the frequency response and the spatial uniformity even at low frequencies, although not in every specific case.
University of Science and Technology, Narmak, Tehran, Iran
ABSTRACT
This paper reports a study of the acoustics of the ancient Iranian music room at "Aali Ghapoo". Ali Ghapo, or Aali Ghapoo, is a grand palace in Isfahan, Iran. It is located on the western side of the Naghsh-i Jahan Square, opposite the Sheikh Lotf Allah Mosque. It is forty-eight meters high and has seven floors. In the sixth-floor music room, deep circular niches are found in the walls, having not only aesthetic value but also an acoustic function. In this work, firstly an acoustic simulation of this music room was carried out using ODEON. Secondly, the influence of the decorative plaster mouldings on the acoustic support was analyzed. For this purpose, models of the music room with and without the circular niches were simulated. Finally, a comparative analysis of both acoustic simulations and their results was carried out.
LAM, Université Pierre et Marie Curie, France
ABSTRACT
Acoustic measurements in occupied rooms are relatively rare due to many kinds of difficulties; therefore, no protocol is available for acoustic measurements in occupied rooms. During 2009, the room acoustics group at LAM (Équipe Lutheries-Acoustique-Musique, Institut Jean Le Rond d'Alembert, Université Pierre et Marie Curie, Paris) performed a series of acoustical measurements in music halls in Paris. The halls were chosen in consideration of their historic, architectural or acoustic importance. The fifteen rooms selected include quite different architectural designs. The measurements were made both in empty and in occupied rooms. A particular protocol was developed for the measurements in the occupied halls, which were carried out just before real concerts. This protocol is described in this paper. Critical decisions at both the technical and administrative levels are discussed. Technical decisions include, among others, the kind and the level of the source signal, as well as the number and selection of the recording positions. Administrative decisions comprise the negotiation with the management of the halls and the handling of the public's behaviour. The main difficulties and the proposed solutions will be presented.
(1) Departamento de Música, Facultad de Artes, Universidad de Chile, Santiago, Chile (2) Acoustics Research Centre, University of Salford, Salford, UK (3) Sede Pérez Rosales, Universidad Tecnológica de Chile Inacap, Santiago, Chile
ABSTRACT
The present article presents a method to redistribute the acoustic modes of a rectangular enclosure in the low frequency range using slit resonators. The objective of the work is to compare different optimal design strategies for determining the dimensions of the resonators. The finite element method will be used to model the acoustic behaviour of the room. In addition, a neural network will estimate the loudness level perceived by the listener. The different design strategies are as follows: first, a design strategy will be implemented based on the minimization of the fluctuations of the sound pressure level; second, the optimization will be based on reducing the variations of the loudness level; finally, two optimization methods, a genetic algorithm and differential evolution, will be compared. The three optimization strategies will be compared in general terms, and the design variables that are critical in this process will be determined.
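Since differential evolution may be less familiar than genetic algorithms, a minimal sketch of its mutate / crossover / greedy-selection loop is given below; the cost function here is only a placeholder standing in for the FE-plus-loudness-model evaluation, and the bounds and design variables are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

def differential_evolution(cost, bounds, pop=20, gens=100, F=0.7, CR=0.9):
    # minimal differential evolution: mutation, binomial crossover, greedy selection
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(pop, len(lo)))
    fx = np.array([cost(v) for v in x])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = x[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            trial = np.where(rng.random(len(lo)) < CR, a + F * (b - c), x[i])
            trial = np.clip(trial, lo, hi)
            ft = cost(trial)
            if ft < fx[i]:               # keep the trial only if it improves the cost
                x[i], fx[i] = trial, ft
    best = fx.argmin()
    return x[best], fx[best]

# placeholder cost; the design vector could hold slit-resonator depths and widths
sol, val = differential_evolution(lambda v: np.sum((v - 0.3) ** 2), [(0.0, 1.0)] * 3)
print(sol, val)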
University of New South Wales, Kensington Campus, Sydney, NSW, Australia.
ABSTRACT
In the last decade a considerable number of rating tools, policies, codes and standards have emerged throughout the world in conjunction with sustainability and, in particular, with "green buildings". This is evident in the ongoing worldwide increase in Green Building Councils, which suggests that sustainability is going to be a fundamental component of the future of the building industry. Although the concept of sustainability is complex and raises various concerns, the benefits of energy efficiency and reduced environmental impact outweigh these issues. Other benefits related to social, cultural or economic areas are sometimes overlooked or given little weight. Acoustic considerations therefore turn out to be difficult to find, and are sometimes absent, within the sustainability arena, even though it is well documented that the acoustic discipline has an enormous potential to contribute. The present paper provides a review and an analysis of the acoustics-related credits and considerations found in different rating tools such as LEED in the US, BREEAM in the UK, Green Star in Australia and CASBEE in Japan. These are among the most recognized systems and have been exported to many other countries around the world. Other codes and standards related to sustainability are also examined and compared, to identify the current role of acoustics among them and the challenges ahead for acousticians in order to increasingly integrate the acoustic discipline with the sustainability trend.
CNERIB, Cité Nouvelle El Mokrani, Souidania, Algiers, Algeria
ABSTRACT
Good acoustics is essential for comfort and productivity at work. Rooms intended for oral communication, such as classrooms and meeting or conference rooms, are often not arranged to optimize their function. However, neglecting the acoustic aspects of a classroom or a meeting room, for example, will result in imperfect communication whose consequences will largely exceed the low cost of a correct acoustic design. In these rooms, excessive background noise or noise due to reverberation will reduce the intelligibility of speech. This work treats a real case in which the acoustic comfort of a meeting room was evaluated through measurement of the background noise and the reverberation time. Solutions are proposed to optimize its acoustic comfort.
Consultant
ABSTRACT
Part one presents the design of a neural mechanism for human hearing. The mechanism uses the phase coherence of harmonics in the speech intelligibility frequency range to separate data generated in the basilar membrane into several independent neural streams, each representing a sound source with a particular pitch. Examples of such sources include multiple human speakers and musicians in a string quartet. In the first step of the mechanism, the phase coherence of harmonics within critical bands at the frequencies of vocal formants amplitude modulates the motion of the basilar membrane at the fundamental frequencies of the sources. The non-linearity of hair cells detects this modulation and recovers the individual pitches. The mechanism is capable of detecting pitch with the acuity of a trained musician. In the next step of the mechanism, the pitch of the detected modulations is used to separate each source into an independent neural stream. Each stream can then be analyzed to find the timbre, azimuth, and distance (clarity) of each source. The harmonics used for pitch detection in this mechanism are found mainly in the frequency range of 500 Hz to 5 kHz, which is known to be of vital importance to human hearing. The mechanism also explains aspects of harmony, as various combinations of tones produce unique patterns of output streams. An important aspect of the mechanism is that pitch detection, stream separation, timbre detection, and azimuth detection all depend on phase coherence, and all lose their acuity in a similar way in the presence of early (up to 100 ms) reflections and reverberation. In a computer model of this mechanism the loss of coherence, and thus the loss of acuity, can be measured objectively. In humans the lack of phase coherence leads to the perception of distance to each source. In the computer model it leads to a hearing-oriented method of determining acoustic quality from a single channel of information.
Consultant
ABSTRACT
Human speech contains more information than is conveyed by a transcript of spoken syllables. Tone of voice and the distance to the speaker are vitally important, as they convey state of mind and degree of danger, and yet no measure exists for our ability to detect pitch and distance in the presence of reflections and reverberation. Part two of this talk uses sound clips to demonstrate how our pitch acuity decreases and the sense of distance increases as various types of reflections are added, and shows how the computer model of hearing presented in part one can be used to measure the loss. Data from several acoustic environments will be shown and discussed. We find that ambiguity of pitch perception and azimuth correlates with the perception of acoustic distance to a sound source. We believe that when pitch and azimuth cannot be accurately determined the engagement between performer and listener is lost. Instead of being perceived as "near" and thus demanding attention, the sound is perceived as "far" and can be enjoyed while thinking about something else. The sound (either music or drama) is not engaging. Old-fashioned opera houses emphasized dramatic engagement over reverberation, as do all cinema theaters. Most current concert hall and opera designs favor reverberation, and many seats are not engaging. Concert halls and opera houses that combine high engagement and sufficient reverberation exist - but they are rare. These halls maximize clarity of pitch and azimuth as well as reverberation over a wide range of seats.
Consultant
ABSTRACT
The phase coherence which is responsible for our acuity of pitch, timbre, azimuth, and distance is found in the direct sound that arrives at a listener before reflections and reverberation arrive. When reflections interfere with the direct sound they scramble the phases of all frequencies, particularly those above 500 Hz where most harmonics are found. In human hearing the loss of acuity is perceived as binary. For example, the sound is either "near" or "far"; you can either detect azimuth accurately, or you cannot do it well at all. Reflected energy and reverberation are beneficial as long as they do not exceed the critical value at which phase coherence is lost. The critical value depends on the direct-to-reverberant ratio (D/R) and the time delay between the direct sound and the build-up of reflections. It depends only weakly on the direction of reflections. Using a single channel from the ipsilateral ear of a binaural recording of speech or music, the computer model of the neural mechanism presented in part one can predict whether the critical value has been exceeded. From this data we find that designs that maximize the D/R above 500 Hz, while limiting the strength of reflections above that frequency in the first 100 ms, result in high clarity and engagement over a wide range of seats. The features in existing halls that provide these properties will be presented, along with recommendations for maximizing engagement and reverberation simultaneously in new designs.
(1) Norwegian Academy of Music (2) Brekke & Strand akustikk AS (3) Royal Institute of Technology, Stockholm, Sweden
ABSTRACT
It is well known that plate radiation below the critical frequency is very poor, and therefore many stage floors dissipate low-frequency energy transmitted from double-bass and cello end pins rather than providing a tuning-fork/table-top effect. However, if the stage floor is well damped, so that the transverse amplitudes fade out quickly around the point of excitation, a significant net radiation can also be obtained at low frequencies, due to the piston/baffle effect. Measurements performed in the Lindeman Hall of the Norwegian Academy of Music, in Oslo, Norway, showed that vibrational amplitudes on the stage floor faded out at a nearly equal pace in all directions around the excitation points, leaving nearly circular, quasi-isotropic patterns for most frequencies of interest. In the audience area no tendency of spectral roll-off was seen at the low-frequency end down to 30 Hz, which may represent the lowest fundamental of modern double basses. Transfer functions from stage floor to audience (intensity vs. power, and sound pressure vs. transverse velocity) were calculated for a number of seats in the hall.
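The critical (coincidence) frequency referred to above follows from the plate's bending stiffness and surface mass; below it, bending waves are acoustically slow and radiation is inefficient. The snippet below is a generic textbook calculation; the material values shown are illustrative, not those of the Lindeman Hall floor.

```python
import math

def critical_frequency(E, rho, h, nu=0.3, c0=343.0):
    """Critical frequency (Hz) of a thin homogeneous plate:
    f_c = c0^2 / (2*pi) * sqrt(m'' / B), with bending stiffness
    B = E h^3 / (12 (1 - nu^2)) and surface mass m'' = rho * h."""
    B = E * h**3 / (12.0 * (1.0 - nu**2))
    m = rho * h
    return c0**2 / (2.0 * math.pi) * math.sqrt(m / B)

# illustrative example: 22 mm wooden floor boards, E ~ 10 GPa, rho ~ 500 kg/m^3
print(critical_frequency(E=10e9, rho=500.0, h=0.022))   # roughly 600-650 Hz
```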
Chungbuk National University, Korea
ABSTRACT
The acoustic characteristics of the King Song-Dok bell were examined. The architectural data for bell pavilions of the Silla Dynasty were investigated through historical references, and acoustic measurement data were collected for the King Song-Dok bell. As a result, an appropriate model of a bell pavilion for the King Song-Dok bell is suggested as follows, considering the acoustic characteristics of the bell: 1) a sound resonance bowl must be adopted to reverberate the sound energy; 2) lining lumbers and timbers should be used rather than wood panels in order to radiate the sound more effectively at 64 Hz and 168 Hz, the frequency bands in which most of the sound energy is concentrated; 3) the ceiling should be prefabricated to diffuse the sound into the air; 4) an odd number of Kan with a rectangular or square layout is preferred, with the 12 columns that have been regarded as a traditional form for a stable structure.
University of Indonesia, Depok, Indonesia
ABSTRACT
In architectural acoustics, specific acoustic characteristics and treatments are carefully observed and evaluated for multi-purpose design. In the case of Jakarta, acoustic observations were conducted in the complex of the TIM Art Centre, where one small auditorium is frequently used for multi-purpose performances. The auditorium has a capacity of 250 people and is generally used for theatre and musical drama, and sometimes for musicals, choir, and chamber orchestra performances. The plan shape of the theatre is a shoebox with a proscenium stage and two tiers of shallow balconies. Recently, electro-acoustic reinforcement has been used for performances, although it is said to be unnecessary. The acoustic attributes of the theatre are therefore assessed and evaluated by investigating the behaviour of sound.
This paper evaluates the acoustic properties of the auditorium in relation to its shape, dimensions, and the surrounding materials applied. The acoustic measurement involved is reverberation time. A computer simulation program is employed and real-time measurement is also conducted. The derived acoustic parameters are then validated against theoretical predictions in order to check the accuracy of the measurement method. The results indicate that the RT values derived from the computer simulation are shorter than those from the on-site measurements and the theoretical predictions. The design of the auditorium is appropriate for multi-purpose performances with regard to its RT, as is the interior design from an architectural point of view. The application of electro-acoustic reinforcement is therefore not necessary.
The University of Sheffield, Sheffield, UK
ABSTRACT
Chinese opera, with distinctive Chinese characteristics, plays an important role in traditional Chinese culture, and forms a unique part of the treasure-house of world history. Correspondingly, traditional Chinese theatrical buildings, closely related to Chinese opera, have also attracted great attention. In terms of architectural shape, they can be classified into three main types: open-air theatres, courtyard theatres and indoor theatres. Among them, courtyard theatres made up the majority of all types of traditional theatre. In this paper, firstly, the categories and architectural characteristics of traditional Chinese theatres are reviewed. Secondly, this paper discusses the acoustic characteristics of traditional Chinese theatres according to these three different types. A number of acoustic indexes, such as EDT, T15, T30, D50, C80, G, SPL, LF and STI have been analysed and compared through computer simulation.
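Several of the indices listed above are simple energy ratios or decay slopes derived from a measured or simulated impulse response. As a hedged illustration (a generic textbook-style calculation, not the simulation tool used in the paper), C80 and D50 can be computed as energy ratios and EDT from the Schroeder backward-integrated decay:

```python
import numpy as np

def clarity_definition_edt(ir, fs):
    """Compute C80 (dB), D50 (fraction) and EDT (s) from a room impulse
    response `ir` sampled at `fs` Hz, assuming the direct sound is at sample 0."""
    e = ir.astype(float) ** 2
    n50 = int(0.050 * fs)
    n80 = int(0.080 * fs)
    c80 = 10 * np.log10(e[:n80].sum() / e[n80:].sum())
    d50 = e[:n50].sum() / e.sum()
    # Schroeder backward integration gives the decay curve in dB
    sch = np.cumsum(e[::-1])[::-1]
    decay = 10 * np.log10(sch / sch[0])
    t = np.arange(len(ir)) / fs
    # EDT: straight-line fit between 0 and -10 dB, extrapolated to -60 dB
    idx = decay >= -10.0
    slope, _ = np.polyfit(t[idx], decay[idx], 1)       # dB per second (negative)
    edt = -60.0 / slope
    return c80, d50, edt
```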
Takenaka R&D Inst., Inzai, Chiba, Japan
ABSTRACT
A new miniature dodecahedral loudspeaker appropriate for 1/10–1/20 scale room acoustical model experiments was developed. This sound source consists of a PVDF diaphragm as a transducer, whose vibrating surface shape was mechanically modified so as to improve the acoustical characteristics. The reproducible frequency range is from 10 to 160 kHz with relatively flat responses, and the sound pressure is large enough to allow accurate measurements. Major acoustical characteristics of this source, some new results of the scale model experiment and the proposed signal processing algorithm are reported.
Department of Construction, Junior College, Nihon University, Chiyoda-ki, Tokyo, Japan
ABSTRACT
Surface roughness is now often evaluated by the surface diffusivity index (SDI) proposed by Haan and Fricke, which relies on visual inspection of photographs or drawings and is therefore not an objective evaluation. In this paper, we propose an objective evaluation method for the three-dimensional roughness of wall surfaces. The method uses elliptic Fourier descriptors to extract the cycle length and amplitude of complex surface shapes, and characterizes the three-dimensional shapes in terms of spatial frequency.
Hanyang University, Seoul, Korea
ABSTRACT
The effects of stage volume and absorption on the acoustical characteristics of concert halls were examined using computer simulations for stage design. A hall with different stage elements was investigated by comparing the dimensions of the stage and the acoustical parameters of the hall: a shoebox hall was selected, with variation of stage volume and absorption. Results showed that the stage volume mainly affected both stage support and audience sound strength, whereas the seating behind the platform mainly affected reverberation. Accordingly, design considerations for stage enclosures are discussed for both stage and audience acoustics.
Hanyang University, Seoul, Korea
ABSTRACT
The effects of sound strength (G) on perceived listener envelopment (LEV) at audience positions in halls were investigated. Impulse responses were measured in two concert halls of different size and an anechoic recording of violin sound was convolved with the impulse responses. The sound pressure level (SPL) was varied from 68.0 to 75.5 dBA in 1.5 dB steps. A total of 18 sound stimuli with different values of interaural cross-correlation (IACC) from 0.13 to 0.57 were selected for auditory tests. Results of subjective experiments indicated that LEV was not realized when SPL was less than 70 dBA, even with a small value of IACC.
Institute of Acoustics, Tongji University, Shanghai, P.R.China
ABSTRACT
In the prediction of sound decays in coupled rooms, the aperture determines the sound transmission between sub-rooms. Therefore it has significant effects on the acoustics of the primary room. To accurately predict the sound decays in coupled rooms, the sound transmission coefficient of the aperture needs to be considered in accordance with the wavelength, incident direction and the characteristic dimension of the aperture, especially in the low and mid frequency bands. The simple assumption of a transmission coefficient of unity can lead to deviations in the sound transmission through the aperture in computational models of coupled rooms, such as ray-tracing and image source methods. In this paper, the normal-incidence transmission coefficient of a rectangular aperture with finite depth was calculated from the radiation impedance of the aperture, and the oblique-incidence transmission coefficient was then derived from the normal-incidence coefficient by the variational formula. To describe the spatial distribution of diffracted sound energy, the diffraction intensity pattern was simulated using classical Kirchhoff theory. The sound transmission coefficient for random incidence in coupled rooms was theoretically derived and used in a sound simulation based on the acoustical radiosity method. Using this improved simulation method, the effects of aperture diffraction on sound decay in coupled rooms were evaluated for different aperture shapes and frequency bands in a two-sub-room coupled space. The simulated sound decays demonstrated that the acoustical radiosity method, when it considers the diffraction features at the boundaries of apertures, gives satisfactory predictions when the characteristic dimension of the aperture is comparable with the wavelength. To validate the simulation model for sound decay in coupled rooms, experiments were conducted in a 1:10 scale model with aperture aspect ratios of 1, 2 and 4. The simulated results agreed well with those from the scale model tests.
(1) Center for Advanced Science and Innovation, Osaka University, Japan (2) School of Engineering, Osaka University, Japan (3) Graduate School of Engineering, Osaka University, Japan (4) Graduate School of Science and Technology, Kumamoto University, Japan (5) School of Psychological Science, Health Sciences, University of Hokkaido, Japan
ABSTRACT
Musicians and acoustic engineers are often interested in knowing how their tone quality is related to the acoustics of a concert hall. This paper reports a listening test performed to investigate whether the effect of room acoustic conditions on the timbre of musical sound differs with the performing style of the produced musical sound. For semi-anechoic stimuli, nine natural clarinet tones produced at three different dynamic levels and three different notes (A3 ≈ 220 Hz, A4 ≈ 440 Hz, and A5 ≈ 880 Hz) were extracted from the RWC music database, which stores musical instrument sounds. For reverberant stimuli, 18 tones were generated by convolving each semi-anechoic tone with two different binaural impulse responses; these were collected at different seat positions in two different medium-sized concert halls. Fifteen instrumentalists, all with at least six years of experience in playing their respective instruments, participated in the listening experiment. The stimuli were presented dichotically over Sennheiser HD650 headphones to the participants at approximately 60 dB SPL for equal loudness in a soundproof room. The scale values of timbral brightness for the respective semi-anechoic stimuli and their reverberant stimuli at equal notes were obtained through Scheffe's paired-comparison test. Three sessions were carried out, and each session consisted of stimuli with the same tone. The results showed that the room acoustic conditions significantly affect the brightness of the clarinet tones, and the effect differs with the produced dynamic level and with the produced note. These findings call for a further examination of the interaction between the performing style and the room acoustics with respect to the brightness of clarinet tones; this will be presented in Part II of this study that is based on an acoustic analysis.
Kumamoto University, Kumamoto, Japan
ABSTRACT
This study investigated the effect of sound absorption on the indoor environment for infants in nursery schools. The expected effects were a reduction of the noise level and an improvement of speech intelligibility, and consequently a lowered voice level of the infants, which is known as the Lombard effect. Polyester fibre boards were installed on the ceilings of three classrooms of a nursery school, the rooms for 0-, 2-, and 4-year-olds, and the noise levels were measured before the installation and over four months after the installation. Acoustic measurements before and after the installation were also conducted to quantify the physical effect of the sound absorption. The averaged absorption coefficients of the three rooms before and after installation, calculated from the measured reverberation times, were 0.16-0.17 and 0.29-0.30 respectively in the 1000 Hz octave band. Daily noise levels were analyzed using the Leq of three time periods, lunch, book reading, and nap, which represent the noisiest period with children's voices, the teacher's voice, and background noise, respectively. The noise levels before the installation were around 75-79 dBA at lunch time. After the installation, the lunch-time noise level in the 0- and 2-year-old rooms became 6-8 dB lower than before the installation, while that of the 4-year-old room did not change much over the 4 months after the installation. Since the expected physical noise reduction from the added sound absorption was around 3 dB, the observed noise reduction in the 0- and 2-year-old rooms was considered to be caused by the lowered voice levels of the children and teachers, possibly a consequence of improved speech clarity or the Lombard effect.
(1) Hanyang University, Seoul, Korea (2) University of Sydney, NSW, Australia
ABSTRACT
Stage support for both vocal and instrumental performers was evaluated for ensemble performance in concert halls. Halls with different seating capacities were selected for measurements of stage acoustics: STEarly was measured at eight positions where musicians play, to evaluate the ease of hearing oneself. In addition, mutual hearing on stage was evaluated by exchanging sources and receivers between the soloist and orchestra positions. Stage acoustical parameters for stage support and mutual hearing are discussed with regard to the subjective test results.
Kirkegaard Associates, Chicago, IL, USA
ABSTRACT
The challenges for an acoustician working in the Concert Hall at the Sydney Opera House are complex and profound, although identification of the acoustic shortcomings was relatively simple: 1) hearing conditions for musicians on the platform; 2) weak projection of the chorus seated upstage of the orchestra; 3) lack of clarity and definition in most audience areas; 4) non-existent deep bass response; 5) harshness and stridency at upper dynamic levels, especially in the upper frequency range of strings and brass; 6) distancing and loss of immediacy caused by late-arriving reflected sound from the high ceiling and remote side wall surfaces; 7) weak loudness in distant seating areas; 8) loud HVAC systems throughout the hall, strongest in the chorus area behind the orchestra; and 9) the need for acoustic and technical flexibility to serve a broad range of popular, jazz, world music and rock performances while achieving extraordinary acoustics for orchestral and choral performances.
The causes of each of these main acoustic conditions were identified early in the firm's involvement. Finding the means to correct them has been the most challenging task. In November of 2008 a temporary experimental demonstration took place. The "saw-tooth" walls that flank the lower level of seating were covered with heavy black fabric to mitigate the high-frequency diffraction generated by the saw-tooth shaping. The change was clearly audible to musicians on stage as well as to audiences in the stalls and circle seating areas. What was to have been a temporary installation remained in place until August 2009, when another, more comprehensive mock-up of potential acoustic modifications took place. The process of identifying acoustic shortcomings and their causes will be discussed in detail together with recommended mitigation techniques. To provide assurance of the appropriateness and sufficiency of the recommended modifications, full-scale mock-ups were constructed in the hall for live testing during rehearsals and performances. Technical results will be discussed and related to the subjective observations presented in the previous session: Part One: The Client's Perspective. Funding of the project has been delayed because of the economy; however, confidence that profoundly improved acoustics can be achieved within modest means provides the incentive to make the improvements a high priority.
Pyrotek Noise Control, Melbourne, Australia
ABSTRACT
Typically there are two types of bulk porous absorbers, cellular and fibrous. Cellular absorbers usually cover products made from polymeric foams such as polyurethane. Fibrous absorbers use inorganic fibres such as glass or basalt; recently polymeric fibres have become more popular. The new porous absorber is a combination of the two technologies. It uses fully recycled glass as the base constituent, which is foamed to create porous glass beads. These beads go through a unique sintering process that bonds them together into a homogeneous panel with no binders. The finished panel has a high absorption coefficient (NRC 0.90), can be used in an open environment with no degradation, and is non-combustible.
Brekke & Strand akustikk as, Oslo, Norway
ABSTRACT
The Norwegian National Opera House has a gross area of 38,500 m2 divided into 1,100 rooms, with stage areas covering about 8,300 m2. Great care has been taken to ensure good acoustics in the main auditoriums as well as in the rest of the building, including a large number of rehearsal rooms, audience areas, administrative and workshop areas. This paper presents a summary of the most interesting results and experiences from measurements outside the main auditoriums. Construction details and the chosen solutions for room acoustic treatment are shown. The following areas are covered: (1) Room acoustics / reverberation time in the rehearsal rooms, such as those for the orchestra, choir and ballet, and small rooms for one or two singers or musicians. Most of these rooms have the possibility of varying the room acoustics. It was important to achieve the right reverberation time and diffusion, and to be able to vary the reverberation. The measurements show that the reverberation time in the different rehearsal rooms can be varied by 0.2-0.4 seconds using curtains and banners. (2) Room acoustics / reverberation time in the audience areas such as the foyer and restrooms. In these areas, special solutions are used to integrate architectural expression and acoustic treatment.
Department of Architecture, National Cheng-Kung University, Taiwan
ABSTRACT
An irregularly shaped air layer in a sound absorbing structure is formed by the different construction modes of the materials. When diverse interior spaces are built with various materials and construction methods, the air layer between the facing and the building structure takes on an irregular shape.
Previous studies of absorbing structures provide little information about the influence of irregularly shaped air layers. Past research on the sound absorption of such structures focused on facings parallel to the building structure. To investigate the influence of an irregularly shaped air layer on sound absorption, this study considers air layers whose facing tilts about a single axis. The factors considered are the angle between the tilting facing and the horizontal plane, the span of the tilting facing, and whether the air layer is divided into several unconnected parts. The sound absorption characteristics of irregularly shaped air layers are presented in terms of these factors.
The measurement results are presented in two parts: the influence of the irregular shape on the absorption coefficient, and the influence of the air cavity configuration. The effects of the irregular shape are divided into the influences of the angle and of the span of the tilting facing. Within the range of angles considered, the angle of the tilting facing has a more pronounced influence on a panel structure backed by an irregularly shaped air layer; the absorption coefficient increases with the angle, and the effect appears mainly below 250 Hz. Regarding the span of the tilting facing, the perforated panel structure is influenced markedly at high frequencies. Furthermore, as the span of the tilting facing increases, the absorption of the panel backed by the irregularly shaped air layer decreases, whereas the absorption of the perforated panel shows little difference. In addition, for the same angle and span of tilting facing, the perforated panel structure shows higher absorption than the plain panel structure at middle and high frequencies.
Regarding the effect of the air cavity configuration on the absorption coefficient, both the panel and the perforated panel structures are influenced mainly at low frequencies, especially at 200 Hz. Whether or not the air cavity is subdivided, the panel structure shows less influence, and its absorption coefficient decreases with increasing span at low frequencies.
Engineering Physics, Institute of Technology Bandung, Bandung, Indonesia
ABSTRACT
Angklung music is one of Indonesia's most distinctive traditional musical forms. It originated in West Java and is played by many groups, from elementary school pupils to university students. Recently, to enrich and enhance the quality of this music, the need to develop a concert hall dedicated to Angklung music has arisen in the community. In order to design the best sound field for an Angklung concert hall, a physio-acoustic investigation was conducted to find the most preferred initial delay time (ΔT1) between the direct sound and the first reflection in the hall. A piece of Angklung music with delay times varying from 20 to 60 ms in 10 ms steps was simulated and presented to subjects, while the other acoustic parameters, such as the subsequent reverberation time (Tsub) and IACC, were kept constant. The subjects' responses were derived by measuring their brainwaves with an EEG measurement system. The alpha-wave power measured in the temporal (T3-T4) and parietal (P1-P2) regions shows that the maximum alpha-wave power occurred at a delay time of 30 ms.
Key Laboratory of Modern Acoustics and Institute of Acoustics, Nanjing University, Nanjing, China
ABSTRACT
An image source method is presented for coherently evaluating the sound field from a point source in flat waveguides with two infinite and parallel locally reacting boundaries, where one is sound absorbing and the other is reflective. The method starts from formulating sound reflections as integrals by plane wave expansion, and the inherent intractability of solving these integrals in such spaces is avoided by introducing a physically plausible assumption that wave fronts remain the same before and after reflection on a near-rigid boundary. By comparison with classical wave theory and the existing coherent ray-based methods, it is shown that the proposed method predicts sound propagation in flat waveguides with a sound absorbing ceiling and a reflective floor accurately over a broad frequency range and for various source/receiver geometries, even at distances from the source that are large compared with the waveguide height, where the existing methods are shown to be erroneous.
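For context, the classical coherent image source construction for two infinite parallel planes places image sources at mirrored heights and sums their contributions with the appropriate reflection factors. The sketch below is a generic illustration of that construction with constant (angle-independent) plane-wave reflection coefficients; it is not the wave-front-corrected method proposed in the paper, and the geometry and reflection factors shown are illustrative only.

```python
import numpy as np

def parallel_plate_pressure(f, d, zs, zr, h, R_floor=1.0, R_ceiling=0.3,
                            max_order=50, c=343.0):
    """Coherent image-source sum for a point source between an infinite
    floor (z = 0) and ceiling (z = h), using constant plane-wave reflection
    factors. d is the horizontal source-receiver distance; zs, zr are the
    source and receiver heights."""
    k = 2 * np.pi * f / c
    p = 0.0 + 0.0j
    for n in range(-max_order, max_order + 1):
        # family A: images at 2nh + zs, with |n| reflections from each surface
        zi = 2 * n * h + zs
        r = np.hypot(d, zr - zi)
        p += (R_floor ** abs(n)) * (R_ceiling ** abs(n)) * np.exp(-1j * k * r) / r
        # family B: images at 2nh - zs
        zi = 2 * n * h - zs
        r = np.hypot(d, zr - zi)
        if n >= 1:
            nf, nc = n - 1, n          # n ceiling hits, n-1 floor hits
        else:
            nf, nc = 1 - n, -n         # 1-n floor hits, -n ceiling hits
        p += (R_floor ** nf) * (R_ceiling ** nc) * np.exp(-1j * k * r) / r
    return p   # complex pressure relative to a unit-amplitude free-field source

# hypothetical example: 1 kHz, 20 m range, 3 m high waveguide
if __name__ == "__main__":
    p = parallel_plate_pressure(f=1000.0, d=20.0, zs=1.5, zr=1.2, h=3.0)
    print(20 * np.log10(abs(p)))
```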
(1) Faculty of Mechanical Engineering, Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, 76100 Durian Tunggal, Melaka, Malaysia (2) School of Engineering, Taylor's University College, 47500, Subang Jaya, Selangor, Malaysia
ABSTRACT
Sound absorption coefficients of materials are among the most important data needed to determine the reverberation time of an enclosure. Published data on the sound absorption coefficients of typical materials are available in textbooks. However, the sound absorption coefficients of Malaysian woods such as Chengal, Meranti, Nyatoh and Keruing, to name a few, have not yet been established. Therefore, this paper presents preliminary results for the sound absorption coefficients of 100 types of Malaysian wood. Initial work has been carried out numerically in MATLAB using the Delany-Bazley approximation. In general, it is found that at lower frequencies (<500 Hz), as expected, the sound absorption coefficient of Malaysian wood is low, while at higher frequencies (>500 Hz) it is high. Moreover, at higher frequencies, a species with a higher density has a lower sound absorption coefficient than a species with a lower density. Further work will involve experimental validation of the numerical results.
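For reference, the Delany-Bazley empirical model estimates the characteristic impedance and wavenumber of a porous material from its static air-flow resistivity; the normal-incidence absorption coefficient of a rigidly backed layer then follows from the surface impedance. The sketch below is a generic Python transcription of those textbook relations; the flow resistivity and thickness values are placeholders, not measured data for any Malaysian wood species.

```python
import numpy as np

def delany_bazley_alpha(f, sigma, d, rho0=1.21, c0=343.0):
    """Normal-incidence absorption coefficient of a porous layer of thickness d (m)
    and flow resistivity sigma (Pa.s/m^2) mounted on a rigid backing,
    using the Delany-Bazley empirical relations."""
    X = rho0 * f / sigma                           # dimensionless frequency parameter
    Zc = rho0 * c0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
    kc = (2 * np.pi * f / c0) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)
    Zs = -1j * Zc / np.tan(kc * d)                 # surface impedance, rigid backing
    R = (Zs - rho0 * c0) / (Zs + rho0 * c0)        # pressure reflection factor
    return 1 - np.abs(R)**2

# hypothetical example: 50 mm layer with sigma = 20000 Pa.s/m^2
f = np.array([125.0, 250.0, 500.0, 1000.0, 2000.0, 4000.0])
print(delany_bazley_alpha(f, sigma=20000.0, d=0.05))
```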
(1) Akukon Oy Consulting Engineers, Helsinki, Finland (2) Akukon Oy Consulting Engineers, Tallinn, Estonia (3) Kahle Acoustics, Brussels, Belgium (4) Moscow P.I. Tchaikovsky Conservatory, Moscow, Russia
ABSTRACT
The Moscow P.I. Tchaikovsky Conservatory is located in the centre of Moscow in a 19th-century building. The Great Hall of the Conservatory is probably the most loved hall in Russia among both musicians and audiences, both for its visual appearance and not least for its acoustic conditions. The hall will be renovated for the Tchaikovsky competition in 2012. In connection with the renovation design, the acoustic conditions in the hall have been measured, both by objective measurements in the hall and by questionnaires for the orchestra and the audience. A 1:20 scale model has also been made, and the geometry of the hall has been studied in computer models. This paper presents the acoustic conditions of the hall based on these studies.
(1) Applied Physics Dpt., Polytechnic University of Valencia, Spain (2) IRTIC, University of Valencia, Spain
ABSTRACT
Acoustical simulation is an important issue in room acoustics, since algorithms and computers allow the development of numerical acoustic models. Through this process it is possible to obtain acoustical parameters for any environment, whether already built or in its design phase. From these parameters the acoustical characteristics of a room can be improved, and it is possible to test the effect of any change. Our work focuses on the simulation of geometrically complex rooms.
When doing acoustical simulation, we have to build the geometrical model properly, and if there is any error, whether made by the modeller or introduced in a conversion, it must be corrected in order to achieve an accurate model. Having a good geometrical model of the room is essential, but in some cases this is not possible. Moreover, when using simulation software, the geometrical model is in most cases imported from other modelling software (CAD modelling), and this process can lead to conversion errors. Up to now these errors have been corrected by hand, which can be a tedious process when working with highly complex buildings. We propose a tool to automatically reduce the geometrical errors derived from such complex models. We start from a debug file which includes a list of geometrical errors detected by the acoustical simulation program, and these are corrected in an iterative process between our tool and the simulation program.
Kyoto University, Japan
ABSTRACT
Multilayer leaf structures have been applied in many fields. However, this type of structure suffers from a sound insulation deficit at low and mid frequencies due to mass-air-mass resonance. Meanwhile, the micro-perforated panel (MPP) can provide good absorption over a wide frequency range, as reported by Maa, and is recognized as a next-generation absorption material. Several papers have reported that the sound insulation deficit of multilayer structures at low and mid frequencies can be improved by using MPPs. However, in these studies the MPP is set in the air layer, where it has less effect on the improvement, and the prediction methods do not correspond well with experiment. In this study, a new prediction method is suggested for the sound transmission loss (STL) of multilayer structures with a flexible MPP of infinite extent. The STL is calculated with an analytical model that considers oblique plane-wave incidence. The method is based on both the wave equation and the equation of panel vibration, and considers the effects of the microperforations at each boundary in a coupled analysis of wave motion and panel vibration. In addition, considering the directional distribution of incident energy in a reverberation chamber, the average sound transmission loss of multilayer structures is calculated. In order to verify the sound transmission theory with a flexible MPP, a theoretical model in cylindrical coordinates is also presented, in which the panels are clamped at the edge. This sound transmission theory for panels of finite extent is investigated experimentally using an acoustic tube.
Queensland University of Technology, Brisbane, Qld, Australia
ABSTRACT
A combined specular reflection and diffusion model using the radiosity technique was developed to calculate road traffic noise levels on residential balconies. The model can handle numerous geometrical configurations for a single balcony situated in the centre of a street canyon. The geometry of the balcony and the street can be altered in width, length and height, and the surface absorption properties of the balcony and street surfaces can be configured for numerous scenarios. The model was used to perform calculations for three different geometrical and acoustic absorption configurations of a balcony. The calculated results are presented in this paper.
(1) Shizuoka University, Japan (2) Tomoegawa Co., Ltd., Japan
ABSTRACT
We are searching for materials that are hard yet absorb a significant amount of sound, or that are hard and do not affect sound transmission. For this purpose, we measured the normal-incidence absorption coefficient of perforated plates, with and without glass wool absorption material, by the transfer function method. The perforated plates include commercial aluminium perforated plates with 22.6% and 8.2% perforation rates, and aluminium plates with a 1% perforation rate. The thickness of the plates is 0.5 mm to 2 mm, and the diameter of the holes is 0.5 mm or 1 mm. For comparison, plates with a single hole giving 22.6%, 8.2% and 1% perforation rates were also measured. The results imply that the perforated plates act as low-pass filters whose cut-off frequency depends on the perforation rate and the thickness of the plates. For the same perforation rate, the cut-off frequency of a perforated plate is higher than that of a plate with a single hole. To simulate this phenomenon, we performed analyses using both FEM and an electrical equivalent circuit model of the tube. It is shown that the particle velocity in the holes of the plate is higher than in the other parts. It is also shown that, for the perforated plate, the wave front passing through the plate is not greatly changed compared with that for the plate with a single hole.
Dept. of Architecture Technology, Hiroshima International University, Hiroshima, Japan
ABSTRACT
To improve the sound insulation of a partition, it is often laminated from multiple plates. Although it is hard to solve theoretically for the sound field transmitted through a laminated partition using its equation of motion, a previous study showed that the transmission loss can be predicted if the laminated panel is treated as a homogeneous panel. That study assumed the following estimation method for the parameters of the laminated panel: (1) Young's modulus is obtained as a combination of the value based on the geometrical moment of inertia of each plate and the Young's modulus averaged by plate thickness; (2) the loss factor of the laminated panel is obtained as the loss factor averaged and weighted by panel thickness. In this study, the author suggests an estimation method for the parameters of the laminated panel as an application of the well-known Ross-Kerwin-Ungar model. The laminated panel, spot-screwed or spot-glued from two plates, is modelled as three layers consisting of the two plates and a boundary layer between them. The boundary layer is assumed to have a lossy elastic modulus and very small thickness. The estimation method is verified and discussed by comparing estimated values with measured values of the parameters of the laminated panels.
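The first assumption of the earlier method, combining each plate's contribution through its geometrical moment of inertia, corresponds to ordinary composite-plate theory about the laminate's neutral axis. The sketch below illustrates that generic calculation for bonded layers; it is a simplified illustration (Poisson-ratio effects are ignored), not the author's RKU-based three-layer model, and the material values in the example are hypothetical.

```python
def effective_plate_properties(layers):
    """Effective Young's modulus and mass per unit area of a laminated plate.
    `layers` is a list of (E, rho, h) tuples per layer, bottom to top.
    Bending stiffness per unit width is combined about the composite
    neutral axis (classical composite-beam / laminate theory)."""
    # neutral-axis position weighted by extensional stiffness E*h
    z, mids, EA, moment = 0.0, [], 0.0, 0.0
    for E, rho, h in layers:
        zc = z + h / 2.0                 # mid-plane height of this layer
        mids.append(zc)
        EA += E * h
        moment += E * h * zc
        z += h
    z_n = moment / EA
    # bending stiffness per unit width: own inertia plus parallel-axis terms
    B = sum(E * (h**3 / 12.0 + h * (zc - z_n)**2)
            for (E, rho, h), zc in zip(layers, mids))
    h_tot = sum(h for _, _, h in layers)
    m_area = sum(rho * h for _, rho, h in layers)
    E_eff = 12.0 * B / h_tot**3          # modulus of an equivalent homogeneous plate
    return E_eff, m_area

# hypothetical example: a 12 mm gypsum-like plate glued to a 9 mm plywood-like plate
E_eff, m = effective_plate_properties([(2.5e9, 800.0, 0.012), (5.0e9, 600.0, 0.009)])
print(E_eff, m)
```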
(1) Faculty of Science and Engineering, Waseda University, Tokyo, Japan (2) Acoustic Laboratory, Waseda University, Tokyo, Japan
ABSTRACT
As part of educational environment design for nursery institutions focused on the sound environment, we report the results of a sound environment survey of facilities and of an experimental examination of environmental design focused on acoustical characteristics. To understand the environment from the children's viewpoint, which differs from an adult's, and to feed this back into the design, we examined designs based on children's hearing height and behavioural patterns. The sound reaching children's ears is greatly influenced by reflections from the floor and nearby furniture, so we tried an acoustic design using two kinds of carpet. We measured the change in acoustical characteristics as a difference in reverberation times; the sound absorption effect was particularly pronounced at 1-4 kHz and at lower heights. When the carpet was actually laid down during activity time in the kindergarten, the results suggested that the sound absorption characteristics of the carpet had only a small influence on the sound environment in teacher-led learning activities in the nursery room. On the other hand, in free activity scenes where children act independently, the relationship between the children and the environment changed, and the way the nursery room was used changed greatly. Although few children stayed and played in the room for long in the usual situation, the room was actively used for make-believe play when carpet A, with short pile, was laid down, and for dynamic activities such as running and jumping when carpet B, with long pile, was laid down. As a result, the characteristics of the actual sound environment changed more through the change in activity sound caused by this behavioural change than through the absorption characteristics of the carpet itself.
Brekke & Strand akustikk as, Oslo, Norway
ABSTRACT
Oslo's opera house, new in 2008, has several large glass areas in the facades. The site is located in an area with heavy traffic. At the design stage it had to be taken into consideration that the opera house might have to be usable with outdoor traffic noise at a level of LAeq = 70 dB for a period of many years. The facade to the east has large glass areas for several rehearsal halls. Sound insulation against traffic noise into these halls is critical for the success of the opera project and had to be tested. Three of these halls are on level 4 with no access to the outside of the building. The field testing of the actual sound insulation in the finished building presented severe practical challenges. It was not practically possible to measure outdoor and indoor levels simultaneously, so great care had to be taken to ensure that the sound field was as similar as possible during the outdoor and indoor measurements. Two sets of amplifier and loudspeaker were needed to achieve a sufficiently diffuse and repeatable sound field on the outside of these facades. The paper deals mainly with a description of the practical arrangement of the measurements and their results. Other aspects of sound insulation in facades have been treated in earlier papers, which are referred to.
Renzo Tonin & Associates, Sydney, Australia
ABSTRACT
Small music practice rooms for non-amplified musical instruments are essential in the teaching of music in music education facilities. The sound insulation performance of wall partitions and doors for music practice rooms is usually the primary consideration and is generally well understood. In this paper, the focus is on the sound quality within the music practice room as perceived by the music student and teacher. The size, shape and finishes of the small music practice rooms, decided at the design phase, determine the final cost, floor area utilised and resulting acoustic quality of the built rooms. This paper reviews the various options for the design of music practice rooms for specific musical instruments and for multi-purpose use. The determination of music practice room sizes, proportions, shapes and finishes, and their potential impact on the sound quality of the rooms, is discussed. Issues regarding standing waves, room modes and the even distribution of the modes in small music practice rooms are also addressed. The various methods of varying the reverberation times and diffusivity in the music practice rooms with alternative room elements and finishes are reviewed and discussed.
University of Canterbury, Christchurch, New Zealand
ABSTRACT
The noise absorbing properties of two- and three-dimensional contoured foam absorbers were investigated. The sound absorption of five differently shaped foams was measured in a reverberation room and compared with that of a plain foam of equivalent volume. The effects of painting the foam surface and of fabric coverings on acoustic performance were also investigated. The amount of absorption can be related to the volume and surface area of each foam. It was found that painting the absorbers had very little effect on their acoustic performance. Two- and three-dimensional finite element models are being developed to further investigate the effect of surface shape on absorption.
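Absorption measured in a reverberation room is usually derived from the change in reverberation time with and without the sample, along the lines of ISO 354. The snippet below is a hedged illustration of that basic arithmetic in its simplest textbook form (air absorption corrections omitted), not the authors' exact procedure, and the numbers in the example are invented.

```python
def sample_absorption(V, S_sample, T_empty, T_with_sample, c=343.0):
    """Equivalent absorption area added by a sample in a reverberation room of
    volume V (m^3), and the resulting absorption coefficient, from reverberation
    times measured without and with the sample (ISO 354 style)."""
    A_added = 55.3 * V / c * (1.0 / T_with_sample - 1.0 / T_empty)
    alpha_s = A_added / S_sample
    return A_added, alpha_s

# hypothetical example: 200 m^3 room, 10 m^2 of foam, RT drops from 5.0 s to 3.2 s
print(sample_absorption(V=200.0, S_sample=10.0, T_empty=5.0, T_with_sample=3.2))
```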
Mueller-BBM Munich, Germany
ABSTRACT
The theatre "Teatro di San Carlo" in Naples, built in 1737, is the oldest opera house still active in Europe. After a devastating fire in 1816, the theatre has been rebuilt by Architect Niccolini and is being considered, until today, one of the most beautiful semi-classical opera houses in the world. Also, from the acoustic point of view, it is reputed to be one of the best theatres in the world. These excellent characteristics had to be maintained and in no way affected by the extensive restoration and enlargement work which was carried out during a period of a year and a half, until January 2010; on the contrary, if possible, the acoustics were even to be improved. The restoration of the auditorium in terms of room acoustics, the creation of new rooms above and under the auditorium as well as the installation of an air-conditioning system were, among others, the object of the acoustic project of Müller-BBM. The activities carried out as well as the obtained acoustic results are being illustrated in form of objective measurements and subjective comments of the artists and the opera lovers.
(1) University of Pavia, Via Ferrata 1, 27100 Pavia, Italy (2) Tecnasfalti Srl, Via dell’Industria, 12, loc. Francolino 20080 Carpiano (Mi), Italy
ABSTRACT
Nowadays, building conservation and refurbishment attract worldwide attention. In particular, in the public sector, a change of occupancy is commonly used in order to maintain the existing functional layout of spaces and the original structure of the building. Further improvements also need to be considered in order to preserve indoor environmental quality. A case study is provided by the analysis of the acoustical performance of an auditorium in Italy, the historic S. Giorgio Palace in Genoa. The palace was built in 1260 and was the most important public palace in the town; it became the headquarters of the Port Authority in 1903. Although the highly reflective materials covering the interior surfaces produce high reverberation times, the hall is mainly used as a conference hall. The acoustic restoration project, approved by the Italian Ministry of Cultural Heritage, allows only the application of woven materials for the floor and curtains, which can easily be removed in case of a change of use, in order to respect the historic and architectural value of the hall. Acoustical measurements with the impedance tube have been performed so far in order to define the best woven materials for improving the overall acoustic performance of the hall. The normal-incidence sound absorption coefficient of different samples of carpet has been tested, and a procedure for sample placement in impedance tube measurements has been outlined. Carpet is a textile material with good sound absorption, mainly at high frequencies. In order to improve its acoustic properties at low frequencies, multilayer systems composed of carpet and felt with different characteristics have been experimentally investigated and the optimal configuration has been defined.
Graduate Program in Architectural Acoustics, Rensselaer Polytechnic Institute, New York, USA
ABSTRACT
Much architectural acoustic research is devoted to a single source in a room. However, most situations involve competing sources. One context for understanding competition between multiple sources is operatic performance. Here, spectral and level differences between the singer on the stage and the orchestra in the pit, as reflected in the parameter Balance (B), allow the listener to discern both sources, but this is not a complete understanding of the situation. Human hearing is sensitive to more characteristics of the sound field than relative levels. Reflected sound, shaped by the acoustic enclosure, provides the auditory system with information that can modify enjoyment and understanding of the signal. This research extends the single-source room acoustic parameters Clarity (C) and inter-aural cross-correlation (IACC) to multiple sound sources by examining their stage-to-pit ratios. In an opera house, there are specific surfaces which provide key early reflections to the audience from the singer, and separate surfaces that reflect sound from the orchestra. These surfaces can be manipulated independently in order to adjust the parameters of the singer's and the orchestra's sound fields. This study utilizes acoustic modeling and subjective testing to investigate architectural and parametric configurations for listening to opera's multiple sources.
Architectural Engineering Program, Univ. of Nebraska-Lincoln, Peter Kiewit Institute, Omaha, USA
ABSTRACT
This study investigates the relationships between several classroom acoustics parameters and student achievement. Detailed binaural room impulse response measurements were conducted in four elementary school classrooms in a midwestern public school system in the United States. Unoccupied background noise levels were also recorded in these spaces. Previous studies have compared how different room acoustics metrics predict speech intelligibility while another investigation examined perception-based binaural metrics in a typical classroom. This study extends these previous research areas by comparing both binaural classroom acoustics metrics and unoccupied background noise levels to each other and to the standardized student achievement scores of students in the surveyed classrooms. The binaural metrics examined include interaural level differences, interaural cross-correlations, and comparisons of the speech transmission index and frequency-to-frequency fluctuations between the two ears. The results will indicate which classroom acoustics parameters, if any, are most strongly related to student achievement.
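One of the binaural metrics mentioned, the interaural cross-correlation coefficient, is conventionally defined as the maximum of the normalized cross-correlation between the left- and right-ear impulse responses over lags of up to about 1 ms. A hedged generic implementation (not the authors' analysis code) is sketched below; early or late variants can be obtained simply by choosing the time window.

```python
import numpy as np

def iacc(ir_left, ir_right, fs, t_start=0.0, t_end=None):
    """Interaural cross-correlation coefficient from a binaural impulse response:
    maximum of the normalized cross-correlation over lags of +/- 1 ms,
    evaluated over the time window [t_start, t_end] (seconds)."""
    n0 = int(t_start * fs)
    n1 = len(ir_left) if t_end is None else int(t_end * fs)
    l = ir_left[n0:n1].astype(float)
    r = ir_right[n0:n1].astype(float)
    max_lag = int(round(1e-3 * fs))                 # +/- 1 ms search range
    norm = np.sqrt(np.sum(l**2) * np.sum(r**2))
    full = np.correlate(l, r, mode="full")          # lags from -(N-1) to +(N-1)
    mid = len(r) - 1                                # index of zero lag
    window = full[mid - max_lag: mid + max_lag + 1]
    return np.max(np.abs(window)) / norm

# e.g. an "early" IACC would use t_start=0.0, t_end=0.080 (first 80 ms)
```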
Armstrong World Industries, Lancaster, Pennsylvania, USA
ABSTRACT
With our increasing sensitivity, worldwide, to the impact of buildings such as schools on the environment, we have moved towards goals of sustainability and green design. Early implications of 'green design' strategies indicated that design decisions should not be made solely on the basis of sustainability and energy conservation; rather, these goals need to be pursued in conjunction with building IEQ issues to ensure that occupants and users of these spaces will have a healthy and productive environment. Results from occupant satisfaction surveys, such as those by the Center for the Built Environment (CBE) at the University of California, Berkeley, initially showed that 'green' (LEED-rated) buildings generally did not fare as well as traditional buildings in terms of occupant satisfaction with acoustic performance.
More recently, the various green rating systems used around the world address acoustics within the framework of the ratings either as prerequisites, or for enhanced credit points. School buildings are designed and built specifically for the purpose of educating students, where teachers teach and students learn primarily on the basis of verbal and visual teaching cues - obviously a primary goal must be acoustic performance. Acoustic requirements in various green rating systems for school classrooms including LEED for Schools, Green Star for schools, etc., are reviewed.
Also addressed are the IEQ-acoustics issues in schools, and how these are being handled in terms of performance factors linked to speech intelligibility, including sound clarity, background noise, S/N ratio, STI, etc. Architects and interior designers need to understand that speech intelligibility within a classroom depends on sound quality, which depends on the signal-to-noise ratio. The signal is based on the sound clarity, which is determined by architectural design factors including classroom size, shape, and surface treatments. Sound clarity is thus determined by the architecture, and it will not change unless the architecture is changed. Background noise, on the other hand, is based primarily on exterior (environmental) noise intrusion and interior HVAC noise. Noise changes all the time, so listening harder or longer may actually enable some degree of understanding.
The acoustic design objective for classrooms must involve designing for speech clarity with architecture, and protecting the speech clarity by ensuring good mechanical design to limit the background noise. Case studies of classrooms from around the world will be presented showing the impact on both the teachers and students based on the acoustic performance of their classrooms.
Engineering Physics, Institute of Technology Bandung, Indonesia
ABSTRACT
It is important for a classroom to have good acoustic, lighting, and thermal conditions. All of these aspects require control components that occupy room surfaces. Thus, if two of these three aspects can be combined, their occupancy of the surfaces can be arranged more effectively. This paper presents the integration of a luminaire-diffusor to control the lighting and acoustical performance of a classroom model. Experiments were carried out using four luminaires, each with a variation of diffusor attached to its surface. Four diffusor variations were used: QRD N7 with dmax 15 cm, QRD N7 with dmax 10 cm, QRD N11 with dmax 15 cm, and QRD N11 with dmax 10 cm. All of these diffusors were designed to diffuse sound at human speech frequencies. Photometric data for the luminaires were taken before and after the integration in order to analyze the difference between the two conditions. Diffusion coefficient measurements were also taken to assess the acoustical performance. Results show that the acoustic diffusor variations did not significantly change the lighting intensity distribution or the average illuminance on the work space. It was also shown through simulation that the integrated system can be used to increase the acoustic performance of a classroom.
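For context, the well depths of a quadratic-residue diffusor with prime N follow the standard Schroeder design rule, proportional to n² mod N and scaled by the design wavelength. The snippet below reproduces that rule for the N = 7 and N = 11 sequences mentioned above; the design frequency used here is only an illustrative value, not the one used in the paper.

```python
import numpy as np

def qrd_well_depths(N, f_design, c=343.0):
    """Well depths (m) of a one-period quadratic-residue diffusor with prime N,
    designed for lowest diffusion frequency f_design: d_n = (n^2 mod N) * lambda / (2N)."""
    lam = c / f_design
    s = np.array([(n * n) % N for n in range(N)])
    return s * lam / (2 * N)

# example: N = 7 and N = 11 sequences for a 500 Hz design frequency (illustrative value)
for N in (7, 11):
    depths = qrd_well_depths(N, f_design=500.0)
    print(N, np.round(depths * 100, 1), "cm; max depth", round(depths.max() * 100, 1), "cm")
```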
(1) IRTIC, Universitat de València, Spain (2) Applied Math. Dpt., Universitat Politècnica de València, Spain (3) Applied Physics Dpt., Universitat Politècnica de València, Spain (4) Phisiology Dpt., Medicine Faculty, Universitat de València, Spain (5) Basic Psycology Dpt., Universitat de València, Spain (6) Applied Physics Dpt., Universitat de València, Spain
ABSTRACT
The impulse response of a room is an important feature for determining the main acoustic parameters at a measurement point in a room or environment. This characteristic depends on the location of the receiver. The precise determination of this attribute can be integrated into a complete auralization process in order to obtain a real-time system. Nowadays, many audio applications take the IR into account in several fields, such as virtual reality environments, teleconferencing and sound spatialization. In this work, we have determined (by measuring and simulating the rooms) the BRIR at several locations in two rooms of different sizes, geometries and purposes: a church and a multi-purpose room.
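Once a BRIR is available, auralization at that receiver position amounts to convolving an anechoic (dry) signal with the left- and right-ear responses. The following is a minimal offline sketch; a real-time system such as the one described would typically use partitioned convolution, and the variable names here are placeholders.

```python
import numpy as np
from scipy.signal import fftconvolve

def auralize(dry, brir_left, brir_right):
    """Convolve a mono anechoic signal with a binaural room impulse response
    to obtain a two-channel auralization at the measured or simulated position.
    All signals are assumed to share the same sample rate."""
    left = fftconvolve(dry, brir_left)
    right = fftconvolve(dry, brir_right)
    out = np.stack([left, right], axis=1)
    return out / np.max(np.abs(out))                # normalize to avoid clipping

# hypothetical usage with arrays already loaded at a common sample rate:
# stereo = auralize(dry_signal, brir[:, 0], brir[:, 1])
```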
Institute of Acoustics, Tongji University, Shanghai, P.R.China
ABSTRACT
A wooden micro-perforated panel (W-MPP) with the dual functions of acoustic absorption and decoration is presented in this paper. The sound absorption characteristics of the W-MPP are theoretically predicted and compared with measurement results obtained in an impedance tube. The paper further discusses the effect of slot depth and stripe width on the absorption characteristics. The manufacturing process of the W-MPP is simple and easy to operate. Finally, an actual example of cinema acoustic design demonstrates the interior decoration capability of the W-MPP in addition to its acoustical properties.
Brekke & Strand Akustikk, Oslo, Norway and AKUTEK (www.akutek.info)
ABSTRACT
Wallace Sabine introduced the reverberation time (RT) as a measure of acoustic conditions in rooms a century ago. After some decades of experience with RT it became evident that two rooms with similar RT could still sound different. To date, a large number of different parameters have been suggested to describe these differences. In an attempt to settle on a limited number of listener aspects, and a limited number of physical measures associated with each of them, a set of five aspects has been suggested. In the ISO standard 3382-1, RT is not included in the group of physical measures associated with listener aspects. It is tempting to jump to the conclusion that the reverberation time era has come to an end. However, statistical analysis of 126 measurements in 11 European concert halls shows that RT is the underlying acoustical parameter governing 4 out of the 5 listener aspects included in ISO 3382-1. In the work reported in this paper, it is concluded that the four listener aspects Level, Reverberance, Clarity and Listener Envelopment can be predicted from RT, volume and source-receiver distance. Thus the statement that RT is the underlying parameter of 4 of the 5 listener aspects still holds. Further investigations with more data should be carried out to increase the statistical confidence of the results.
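One way to see why level-related aspects can be predicted from RT, volume and source-receiver distance is the widely quoted diffuse-field expression for sound strength G and its distance-dependent revision by Barron. The sketch below implements those textbook forms purely as an illustration of the dependence on RT, V and r; it is not the regression model used in the paper, and the constants are the standard ones for an omnidirectional source.

```python
import numpy as np

def strength_classical(T, V, r):
    """Classical diffuse-field prediction of sound strength G (dB)
    for reverberation time T (s), volume V (m^3) and distance r (m)."""
    return 10 * np.log10(100.0 / r**2 + 31200.0 * T / V)

def strength_revised(T, V, r):
    """Barron's revised theory: reflected energy decays with distance and is
    split into early (< 80 ms) and late parts."""
    direct = 100.0 / r**2
    refl = (31200.0 * T / V) * np.exp(-0.04 * r / T)
    early = refl * (1.0 - np.exp(-1.11 / T))
    late = refl * np.exp(-1.11 / T)
    return 10 * np.log10(direct + early + late)

# illustrative values: a 20,000 m^3 hall with T = 2.0 s, listener at 25 m
print(strength_classical(2.0, 20000.0, 25.0), strength_revised(2.0, 20000.0, 25.0))
```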
Brekke & Strand Akustikk, Oslo, Norway and AKUTEK (www.akutek.info)
ABSTRACT
Room acoustical parameters for concert halls are basically designed to describe significant listening aspects of a room. Correlations between the subjective ranking of concert halls and their measured parameter values, averaged over the seating area, have been found. However, results from simulations and measurements indicate that, due to spatial variations, very few listeners will actually be in a position where the set of five parameter averages can be experienced. Therefore, this author has pursued the possibility of explaining the subjective ranking of concert halls by the objective conditions at listeners' ears, as reported in this paper. It is concluded that Beranek's rank-ordering of nine halls can be explained by the objective acoustical conditions at the ears of listeners seated in the better 2/3 to 3/4 of each hall. An explained variance of up to Rsq = 0.94 is found with a set of five parameters, and predictability improved when one of the parameters was excluded. Some of the other conclusions are: the ranges of parameter values associated with good listening quality turn out to be strikingly large in terms of noticeable differences; since it is crucial to be able to predict subjective quality during concert hall planning, the search for significant parameters and optimal combinations of these should continue in further work; more halls should be included in an extended study; and linear regression should be handled with care.
(1) National Institute of Advanced Industrial Science and Technology (AIST), Japan (2) University of Tokyo, Japan (3) South China University of Technology, P.R.China (4) Kobe University, Japan
ABSTRACT
After the Second Vatican Council, the Catholic liturgy changed: the priest now faces the congregation rather than having his back to it as before. However, the acoustical consequences of this change have not yet been examined. To discuss the desirable acoustical conditions of a church, it is necessary to know the acoustical characteristics associated with each form of the Catholic liturgy. In this study, acoustic measurements were conducted with various sound source positions and directions, following the old and the new Catholic liturgy, in four churches in Nagasaki, Japan. The source was a directional loudspeaker placed 1.5 m above the floor, similar to a human speaker. Binaural impulse responses were measured using a dummy-head microphone with a sinusoidal signal whose frequency swept exponentially from 20 Hz to 20 kHz. The dummy head was placed 1.2 m above the floor at twelve to fourteen receiving positions in each church. Acoustical parameters such as listening level (LL), initial time-delay gap (ITDG), reverberation time (RT), and interaural cross-correlation coefficient (IACC) were analysed from the binaural impulse responses. All results for the new liturgy condition, where the source faces the central nave, showed that LL was larger, ITDG was shorter, and IACC was higher than in the old liturgy condition. These results clearly indicate that the change in the Catholic liturgy degraded the acoustical characteristics of the churches, so substantial acoustic improvements, particularly in the frontal part of the church, are strongly recommended.
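A minimal sketch of how IACC can be obtained from a measured binaural impulse response pair is given below. The normalised cross-correlation maximised over lags of up to 1 ms follows the usual ISO 3382-1 definition; the exact analysis settings (integration window, filtering) used in the study are not stated in the abstract and are left out here.

    import numpy as np

    def iacc(p_left, p_right, fs, t_max_ms=1.0):
        """Interaural cross-correlation coefficient from two impulse responses:
        normalised cross-correlation, maximised over lags |tau| <= 1 ms."""
        max_lag = int(round(t_max_ms * 1e-3 * fs))
        norm = np.sqrt(np.sum(p_left**2) * np.sum(p_right**2))
        full = np.correlate(p_left, p_right, mode="full")   # lags -(N-1)..(N-1)
        centre = len(p_left) - 1                             # index of zero lag
        window = full[centre - max_lag: centre + max_lag + 1]
        return np.max(np.abs(window)) / norm

    # Toy example: identical left/right responses give IACC = 1.
    fs = 48000
    n = fs // 10
    h = np.random.randn(n) * np.exp(-np.arange(n) / (0.05 * fs))
    print(iacc(h, h, fs))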
Faculty of Architecture, Design and Planning, University of Sydney, NSW, Australia
ABSTRACT
This study evaluates an approach to providing absorption to a hard-surfaced rectangular room. The concept is that planar cavities are established behind each room surface, with narrow openings to the room around their edges. While there is some potential for tuning such cavities to the axial and tangential modes of the room, such precise tuning may be impractical - so we examine the broad effect of this approach. The concept is evaluated in a scale model room by measuring and comparing the transfer functions between fixed transducers (in room corners) for various interventions. These include the size of the opening around the edge of the planar absorber, the depth of the planar absorber, and the presence or absence of resistive material within the plane.
(1) Department of Architecture, Middle East Technical University and MEZZO Studyo Ltd., Ankara, Turkey (2) Department of Mechanical Engineering, Middle East Technical University, Ankara, Turkey
ABSTRACT
The Heydar Aliyev Center in Baku, Azerbaijan, is an architectural landmark in terms of its symbolic contribution to the city. The contemporary organic architecture of the building, bearing the signature of the architect Zaha Hadid, results in almost no corners or rectilinear surfaces. The acoustical challenge lies in resolving the defects associated with these mostly curvilinear forms and highly reflective surfaces. Inner galleries with reverberation times of up to 10 s are to be brought within a limit of 2.5 s to achieve optimal foyer acoustics. The challenge is compounded by the architect's refusal to compromise on any visual alteration of the architectural design. In order to achieve the sound absorption coefficients required for the desired acoustical performance with materials that cause the least visual modification, much effort has been spent on the compromise between the visual and acoustical features of materials. Acoustically transparent materials are studied under the consideration of architectural continuity, and acoustically transparent plasters with the least visual dissimilarity to regular paints or plasters were sought. These materials are used as a finish over a perforated backing. This paper mainly discusses the acoustical performance of wall construction systems for this challenging design, both in terms of energy decay, achieving optimum sound pressure and noise levels, and in terms of speech intelligibility, so as to provide acoustical comfort for the users.
Shimizu Corporation, Institute of Technology, Tokyo, Japan
ABSTRACT
Many recent studies have used multi-moment methods such as the CIP (constrained interpolation profile) method for the analysis of acoustic wave propagation. The CIP method combines the method of characteristics and polynomial interpolation. This method has less numerical dispersion and is more stable than the FDTD (finite-difference time-domain) method. However, using the CIP method, numerical dissipation often causes a reduction in calculation accuracy. In order to reduce dissipation, we apply this method using interpolation by a fifth-order polynomial. However, as this scheme uses the physical values and their first- and second-order derivatives at the two nearest grid points, the computational load increases slightly. We propose a new algorithm to reduce memory requirements and examine the applicability of this scheme by computing wave propagations in two- and three-dimensional space. In this paper, we first derive the characteristic equations for acoustic waves using this scheme and then propose our new algorithm to reduce the memory requirement. Finally, we show some results of numerical simulations.
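To make the idea of a multi-moment scheme concrete, the following sketch implements the basic third-order (cubic) CIP update for the one-dimensional advection equation, advancing both the value f and its spatial derivative g. The fifth-order variant discussed in the paper additionally carries second derivatives and is not reproduced here; the grid, time step and test pulse below are illustrative assumptions.

    import numpy as np

    def cip_step(f, g, u, dx, dt):
        """One time step of the classical cubic CIP scheme for
        df/dt + u*df/dx = 0 with constant u > 0 (periodic boundaries).
        f: grid values, g: spatial derivatives df/dx at the grid points."""
        fu, gu = np.roll(f, 1), np.roll(g, 1)   # upwind neighbours (i-1)
        D = -dx                                  # signed distance to the upwind point
        xi = -u * dt                             # departure-point offset
        a = (g + gu) / D**2 + 2.0 * (f - fu) / D**3
        b = 3.0 * (fu - f) / D**2 - (2.0 * g + gu) / D
        f_new = a * xi**3 + b * xi**2 + g * xi + f
        g_new = 3.0 * a * xi**2 + 2.0 * b * xi + g
        return f_new, g_new

    # Advect a Gaussian pulse once around a periodic domain of length 1.
    n, u = 200, 1.0
    dx = 1.0 / n
    dt = 0.5 * dx / u
    x = np.arange(n) * dx
    f = np.exp(-((x - 0.5) / 0.05)**2)
    g = np.gradient(f, dx)
    for _ in range(int(1.0 / (u * dt))):
        f, g = cip_step(f, g, u, dx, dt)
    print("peak after one revolution:", f.max())   # stays close to 1 if dissipation is low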
(1) Department of Physics, Nagpur University, Nagpur, India (2) Department of Electronics, Nagpur University, Nagpur, India (3) Laboratory of Acoustics, Faculty of Engineering, University of Porto, Porto, Portugal
ABSTRACT
The human whistle is a representation of human vocal singing. Singing (solo and congregational) is an essential component of sacred music for collective worship in a Catholic church. The acoustic characterization of sacred music is defined in this paper through a derived Acoustic Comfort Impression Index (ACII) and several Acoustic Worship Indices (AWI), namely the Subjective Sacred Factor (SSaF), Subjective Intelligibility Factor (SInF) and Subjective Silence Factor (SSiF). In this study, live sacred music rendered by the human whistle is compared with that by the cello, clarinet, violins and the ensemble, in the Catholic church of the Divine Providence (Goa, India). Among the significant results, ACII for the human whistle was found to be better than ACII for the musical instruments (F = 2.38, p = 0.08); this difference was more significant when the music source was at the nave of the church (F = 2.94, p = 0.04) and lower when the music source was at the choir loft (p = 0.21). SInF for the ensemble music was found to be better than SInF for the human whistle (F = 3.07, p = 0.03). At the nave of the church, SInF was found to be better than SSaF and SSiF (F = 4.17, p = 0.02). SSaF and SInF were equally better than SSiF at the choir loft (p = 0.02). This study opens the possibility of optimized use of the human whistle in rendering sacred music in a church.
Strategic Planning Building Development & Maintenance, The Sydney Opera House, Australia
ABSTRACT
The Sydney Opera House, designed by the late Danish architect Jørn Utzon in 1957, is a vibrant complex of performance spaces enclosed by one of the world's most iconic structures. Completed in 1973, the interior of the Concert Hall was designed by Peter Hall after Utzon had returned to Denmark because of great controversies over cost over-runs. The acoustician for the Concert Hall was Vilhelm Lassen Jordan. Since the first concerts in the hall, musicians onstage and critical listeners in the hall have acknowledged the hall's acoustical deficiencies. For many visitors, however, the extraordinary architecture of the building was sufficiently impressive for it to be listed as one of the world's 10 best concert halls. Finding the means to remediate the acoustic shortcomings of this Heritage-listed world landmark has been challenging. Over a long period of time it has involved acquiring expert advice and balancing that advice against financial and artistic resources as well as the enhanced functionality and ultimate quality to be realized. All this will have to be tempered by historic imperatives, restricted budgets, and seemingly conflicting goals. Mock-ups of acoustic modifications were conducted in late 2009. The author describes the process of gaining authorization for the mock-ups as well as musician and audience reactions to them.
Peutz bv, The Netherlands
ABSTRACT
Many spaces have curved walls or ceilings. With improved building technology and new fashions in architecture (blobs) there is an increasing number of problems due to the acoustic reflections from these surfaces. Sound reflected by concave surfaces will concentrate in a narrow area. In practical applications of room acoustics these curved surfaces are calculated with a geometrical approach: mirror imaging, ray tracing or beam tracing. In computer programs the structure is modeled by flat segments. These geometrical methods do not correspond to reality. The only valid calculation method is one based on wave extrapolation. It is shown that a theoretically correct solution of the sound field produced by curved surfaces is possible. A fairly simple expression for the sound pressure in the focal point is found, and a more complicated description of the sound field reflected by small curved surfaces is presented. An engineering method is presented to estimate the sound pressure due to focussing effects. Some practical examples will be shown.
Acoustics Prog. and Lab., Mech. Eng. Dept., Univ. of Hartford, West Hartford, USA
ABSTRACT
The just noticeable differences (JND) of room acoustics parameters are useful quantities in design and research, as these values provide a guideline as to when a design change will result in a subjectively noticeable difference. The clarity index for music (C80) JND has been studied previously by Cox et al. (1993) and Bradley et al. (1999), who found C80 JNDs of 0.7 dB and 0.9 dB, respectively. These studies had limitations in that Cox et al. had a relatively small subject pool and Bradley et al.'s study used speech signals rather than music, as the focus was C50. Two new studies have been conducted to further investigate the C80 JND. In Study 1, 51 musically trained subjects were exposed to a total of 54 AB paired comparisons, producing results suggesting a higher JND of 1.6 dB. A pilot study, Study 2, was conducted to compare two testing methods: Test Method 1, which was used in Study 1, required the 11 subjects to listen to all of signal A and then all of signal B before giving a response, while Test Method 2 allowed subjects to switch in real time between the two signals, as it was hypothesized that Test Method 2 would yield a lower JND similar to previous work. However, this study yielded an even higher C80 JND of 3.8 dB averaged over both test methods. In particular, an interaction effect was found between test method and the order in which the subjects received each test method. The results that most closely matched the predicted trendline were obtained when the subjects completed the first half of the test using Test Method 1 and the second half using Test Method 2. If the first half of the test was considered a training period, then the results in this case from only Test Method 2 gave a C80 JND of 4.4 dB, which was much higher than found in previous work.
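For reference, the clarity index under discussion is computed from a room impulse response as the ratio of early (0 to 80 ms) to late (beyond 80 ms) energy. A minimal sketch is shown below; taking the absolute peak as the direct-sound onset is a simplifying assumption, and the synthetic impulse response is illustrative only.

    import numpy as np

    def clarity_c80(h, fs):
        """C80 [dB] from an impulse response h sampled at fs.
        The direct-sound onset is taken as the absolute peak (simplification)."""
        onset = int(np.argmax(np.abs(h)))
        split = onset + int(round(0.080 * fs))
        early = np.sum(h[onset:split]**2)
        late = np.sum(h[split:]**2)
        return 10.0 * np.log10(early / late)

    # Synthetic exponential decay with T = 1.0 s; an ideal exponential decay
    # with this T gives C80 of about 3 dB.
    fs, T = 48000, 1.0
    t = np.arange(int(2 * fs)) / fs
    h = np.random.randn(t.size) * np.exp(-6.91 * t / T)
    print(round(clarity_c80(h, fs), 1))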
(1) Dipartimento di Ingegneria, Università di Ferrara, Italy (2) Institut Pprime, CNRS - Université de Poitiers - ENSMA, UPR 3346, France (3) Laboratoire Central des Ponts et Chaussées, Bouguenais, France
ABSTRACT
This work focuses on the calculation of net intensity vectors in rooms, using two different methods: a geometrical method, based on particle tracing, and the room-acoustic diffusion theory. The classical assumption for diffuse sound fields is that the net flow of reverberant energy at any location in a room, i.e., the reverberant intensity vector, is null. The reverberant field in rooms with homogeneous dimensions and uniform absorption coefficients is usually considered as diffuse. This study focuses first on the spatial structure of the intensity vector field in such rooms, showing that, although the energy density variation is weak, an organized structure of energy flows can be observed throughout the room. In a second part, the net intensity field in more complex rooms, such as, for example, long rooms, will be investigated in the same way, for both diffuse and specular reflections, with the aim of providing numerical estimations of the sound intensity field and of the room-acoustic diffusion coefficient.
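For context, the room-acoustic diffusion model referred to above treats the reverberant energy density w(r, t) as obeying a diffusion equation, with the net reverberant intensity given by the gradient of w. A standard statement of the model is reproduced below, with the usual mean-free-path-based diffusion coefficient; the specific formulation used in the paper may differ in details such as the boundary treatment.

    \frac{\partial w(\mathbf{r},t)}{\partial t} - D\,\nabla^{2} w(\mathbf{r},t) = q(\mathbf{r},t),
    \qquad D = \frac{\lambda c}{3}, \quad \lambda = \frac{4V}{S},
    \qquad \mathbf{J}(\mathbf{r},t) = -D\,\nabla w(\mathbf{r},t)

Under this model a perfectly diffuse field (spatially uniform w) gives J = 0, which is precisely the assumption about the net reverberant intensity that the paper sets out to examine.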
Institute of Technical Acoustics, RWTH Aachen University, Germany
ABSTRACT
In ISO 3382, IACC is identified as a single-value parameter to predict the perception of spatial impression in auditoria. Although the perceptual relevance of interaural cross-correlation has been shown by different researchers, the general significance of IACC measurements in auditoria is still being discussed. One of the remaining questions is, for instance, which frequency bands should be used to evaluate IACC. Moreover, the use of IACC measurements to draw conclusions on the acoustic properties of auditoria is still subject to research due to a lack of measurement experience. In this paper a step is taken to determine the reliability of IACC measurements. In order to limit the multitude of factors that might have an influence on IACC results, the focus is placed on the alignment accuracy of the receiver (artificial head) with the sound source. In a first step, extensive measurements were carried out to obtain empirical data that show the influence of receiver misalignment. In a second step these data are used in Monte Carlo simulations to identify measurement errors and uncertainties according to the GUM framework (Guide to the Expression of Uncertainty in Measurement) and its Supplement 1 (Monte Carlo simulations). The presented results show how sensitive IACC is to the discussed uncertainty factor.
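The Monte Carlo step in such a GUM Supplement 1 analysis can be sketched as follows: draw the misalignment angle from an assumed input distribution, propagate each draw through a model of how IACC responds to misalignment, and summarise the resulting output distribution. The sensitivity model used here (a simple quadratic bias in IACC with angle) is purely hypothetical; the paper derives the relation from its empirical data.

    import numpy as np

    rng = np.random.default_rng(1)

    def iacc_given_misalignment(theta_deg):
        """Hypothetical measurement model: IACC bias grows quadratically with
        the head/source misalignment angle.  Placeholder for the empirical
        relation obtained from the measurements in the paper."""
        return 0.30 + 2.0e-4 * theta_deg**2

    # Input quantity: alignment error, assumed normal with standard uncertainty 2 degrees.
    theta = rng.normal(loc=0.0, scale=2.0, size=200_000)
    y = iacc_given_misalignment(theta)

    mean = y.mean()
    u_y = y.std(ddof=1)                       # standard uncertainty of the output
    lo, hi = np.percentile(y, [2.5, 97.5])    # 95 % coverage interval (GUM-S1 style)
    print(f"IACC = {mean:.4f}, u = {u_y:.4f}, 95% interval [{lo:.4f}, {hi:.4f}]")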
Graduate Program in Architectural Acoustics, Rensselaer Polytechnic Institute, Troy New York, USA
ABSTRACT
Recent studies on acoustically coupled volumes, worship spaces, and concert hall acoustics have prompted an increasing interest in analyzing sound energy decays consisting of more than one decay slope, so-called non-exponential decays. It has been considered very challenging to estimate parameters associated with double-slope decay characteristics, and even more challenging when the coupled-volume systems contain more than two decay processes. To meet the need to characterize energy decays with multiple decay processes, this paper reports investigations using both acoustical scale models and numerical models of three coupled volumes. Characterization is based on Bayesian probabilistic inference. Acoustic scale models and diffusion-equation based models are used to evaluate the estimation strategy and to validate the results. The analysis method is then applied to geometric-acoustics models of concert halls with more complex geometries. The analysis method within the Bayesian framework is capable of determining more than two decay slopes and estimating the corresponding decay parameters.
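A minimal sketch of the kind of parametric decay model that such a Bayesian analysis estimates is shown below: the energy-time curve is modelled as a sum of exponential decay terms plus a noise floor, and the decay times follow from the exponents. The parameter values and the Schroeder-integration helper are illustrative only and do not reproduce the paper's inference machinery.

    import numpy as np

    def double_slope_energy(t, A1, T1, A2, T2, noise):
        """Energy-time model for a coupled-volume (double-slope) decay:
        two exponential terms (with 60 dB decay times T1, T2) plus a noise floor."""
        return (A1 * np.exp(-13.82 * t / T1)
                + A2 * np.exp(-13.82 * t / T2)
                + noise)

    def schroeder_db(energy, dt):
        """Backward-integrated (Schroeder) decay curve in dB."""
        edc = np.cumsum(energy[::-1])[::-1] * dt
        return 10.0 * np.log10(edc / edc[0])

    t = np.arange(0.0, 3.0, 0.001)
    e = double_slope_energy(t, A1=1.0, T1=1.0, A2=0.05, T2=3.0, noise=1e-8)
    print(schroeder_db(e, 0.001)[[0, 1000, 2000]])   # decay levels at 0, 1 and 2 s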
(1) Department of Architecture and Mechatronics, Faculty of Engineering, Oita University, 700 Dannoharu, Oita 870-1192, Japan (2) Department of Mechanics, Faculty of Mechanical and Manufacturing Engineering, Universiti Tun Hussein Onn Malaysia (UTHM), Parit Raja, Batu Pahat, 86400, Johor, Malaysia
ABSTRACT
The purpose of this study is to develop an artificial neural network (ANN) model for predicting reverberation times in classrooms. In order to develop the model, more than 700 room acoustics simulations based on the finite element method were conducted. The simulation system, developed at Oita University as "Large-scale finite element sound field analysis (LsFE-SFA)", has had its accuracy confirmed in previous papers. With the LsFE-SFA, the sound fields in one classroom at Oita University are analyzed for varying absorption conditions. Classroom elements such as floor, ceiling, walls, windows, furniture and doors are taken into account in the analyses, and octave-band-pass-filtered impulse responses for the 500 Hz octave band are simulated at several receiving points in the classroom. The simulated results are provided as a training database for the learning process of the ANN, in which back propagation with the Levenberg-Marquardt training algorithm is employed. To confirm the validity of the trained ANN in an actual classroom, three classroom conditions were created: A, the original classroom; B, tiled carpet attached to the door; C, tiled carpet attached to the window. These conditions were then measured using the time-stretched-pulse method to obtain reverberation times. The results are compared with the output of the ANN and the FEM. Acceptable agreement is found for the ANN, with MSEmea,ANN = 2.09x10^-3, while the FEM gives MSEmea,FEM = 5.4x10^-3. The ANN model is able to predict reverberation times within 1 s on a standard PC, and the developed ANN is expected to be useful for practical purposes.
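The structure of such a prediction model can be sketched with a small feed-forward network: absorption-related input features in, reverberation time out. The sketch below uses scikit-learn's MLPRegressor trained with L-BFGS rather than the Levenberg-Marquardt back-propagation used in the paper, and the synthetic Sabine-based training data merely stand in for the FEM-generated database; the surface areas and volume are assumed values.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the FEM database: inputs are mean absorption
    # coefficients of floor, ceiling, walls and window; target is RT at 500 Hz.
    X = rng.uniform(0.02, 0.6, size=(700, 4))
    V = 180.0                                # assumed classroom volume [m^3]
    S = np.array([48.0, 48.0, 110.0, 10.0])  # assumed surface areas per element [m^2]
    A = X @ S                                # total absorption area
    y = 0.161 * V / A                        # Sabine RT as a stand-in "simulation"

    model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                         max_iter=5000, random_state=0).fit(X, y)

    x_test = np.array([[0.05, 0.10, 0.08, 0.15]])
    print("predicted RT:", round(float(model.predict(x_test)[0]), 2), "s")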
Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
ABSTRACT
Based on the wave propagation theory of multi-layered media and an optimizing algorithm, the complex elastic modulus of viscoelastic materials is optimized under different physical conditions to improve material absorption performance. Isoclines of the absorption coefficient with respect to the complex elastic modulus of absorption materials are presented for certain boundary conditions. Assuming an absorption coefficient larger than 0.8, the ranges of elastic modulus and loss factor of the viscoelastic materials under different boundary conditions are given and discussed. The results show that the sound absorption performance can be improved effectively by adjusting the complex elastic modulus of viscoelastic materials. The range of elastic modulus is found to be very sensitive to the boundary conditions when the absorption coefficient is required to be larger than 0.8. The difficulty of adjusting the complex elastic modulus can be reduced with a steel backing, but the absorption performance of the viscoelastic materials becomes worse with a water backing.
Engineering Acoustics Division, Lund University, Sweden
ABSTRACT
Lightweight constructions made of timber have a number of advantages: they can be cost-effective and require relatively short production times. One of the main drawbacks of lightweight structures is related to sound transmission and vibrations. The differences in weight, density, stiffness and mass distribution compared to traditional materials have repercussions on how sound propagates in the rooms and in the structures themselves. Sound and vibration transmission becomes an increasing nuisance. In order to be able to reduce these transmissions, a better understanding is needed of how sound propagates through a real wood cross junction and between floors. The multi-family house in this study has eight storeys and contains 34 apartments. While the ground floor is cast in concrete, all seven floors above are made of wood, which makes this building a perfect object of study for wood building elements.
This work focused solely on the propagation of sound and vibration from one room on the first floor to the adjacent room on the same floor and to the two rooms above. The investigation was further extended by comparing these results to the transmission taking place between the fourth and fifth floors. The investigation also included measurements of mobility and of vibrations induced by real human walking, performed on a wooden floor inside the timber building. The measurements of accelerations induced by a walking person were used to evaluate existing vibration criteria.
The studies carried out highlight the existence of complex phenomena taking place in lightweight buildings and confirm that current methods for evaluating the acoustic quality of lightweight constructions are not well adapted to those structures. Thus, a re-evaluation of the methods is needed, in order to cope with the increasing demand for lightweight constructions and to avoid design mistakes that would degrade their future reputation. This is exactly the main objective of the Swedish project AkuLite: to develop new objective measures for assessing acoustic and vibration quality, with the expected result that the experienced sound, vibration and springiness no longer depend on the structural bearing system of the building.
Institute for Physics, Oldenburg University, Germany
ABSTRACT
Panels and walls within cabins of aircraft and some other means of transportation should have a low mass per unit area, which inevitably leads to a lack of transmission loss (TL). One way to improve the TL is to enhance the stiffness of the element, provided it is tightly fastened at its border strip. Experiments are reported, starting with usual honeycomb and similar airtight panels, to which an appropriate light material of coarse-grained structure is added on one side of the surface. This structure is covered with a thin foil. The foil is stiffened by evacuating the added surface material, which leads to considerable stiffening of the complete device. An improvement in TL of 30 to 40 dB is achieved at low frequencies, depending on the TL of the untreated material. Constructions based on this principle are presented together with measurements of the TL. The improvements and disadvantages with respect to resonance and coincidence frequencies are discussed.
University of Perugia, Italy
ABSTRACT
Sound insulation influences building quality and value. The present paper presents the acoustic performance assessment of a modern residential building which provides high levels of energy saving. Airborne and impact sound insulation and noise due to service equipment were measured by in situ tests in the case study. Besides the need to verify compliance with the limits fixed by standards to protect building inhabitants, the question is how to account for acoustic performance in the assessment of the sustainability and quality of indoor spaces. The paper applies to the examined residence both the total quality certification method BGP (Building Global Performance, based on Italian legislation and an integration of a model originally elaborated for an existing low-performance flat) and the building environmental sustainability certification adopted by the Umbria Region. The present paper aims at studying the relationship between acoustic well-being and environment-comfort standards. As a result, a contribution is provided to integrate the assessment of factors having an impact on building quality and sustainable development.
(1) RMIT University and CSIRO Materials Science and Engineering, Melbourne, Australia (2) CSTB Département Acoustique et Éclairage, France
ABSTRACT
The effect of the resilience of the steel studs on the sound insulation of steel stud cavity walls can be modelled as an equivalent translational stiffness in simple models for predicting the sound insulation of walls. Numerical calculations by Poblet-Puig have shown that this equivalent translational stiffness varies with frequency. Vigran derived a best-fit third-order polynomial approximation to the logarithm of these numerical values as a function of the logarithm of the frequency for the most common type of steel stud. This paper uses an inverse experimental technique. It determines the values of the equivalent translational stiffness of steel studs which make Davy's sound insulation theory agree best with experimental sound insulation data from the National Research Council of Canada (NRCC) for 126 steel stud cavity walls with gypsum plasterboard on each side of the steel studs and sound-absorbing material in the wall cavity. These values are approximately constant as a function of frequency up to 400 Hz. Above 400 Hz they increase approximately as a non-integer power of the frequency. The equivalent translational stiffness also depends on the mass per unit surface area of the cladding on each side of the steel studs and on the width of the steel studs. Above 400 Hz, this stiffness also depends on the stud spacing. The equivalent translational stiffness of steel studs determined in this paper and the best-fit approximation to that data are compared with the values determined numerically by Poblet-Puig and with Vigran's best-fit approximation as a function of frequency. The best-fit approximation to the inversely determined experimental values of equivalent translational stiffness is used with Davy's sound insulation prediction model to predict the sound insulation of steel stud cavity walls whose sound insulation has been determined experimentally.
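The form of best-fit approximation mentioned here, a third-order polynomial in the logarithm of frequency fitted to the logarithm of the stiffness values, can be reproduced generically as below. The stiffness data in the sketch are invented placeholders shaped only to be roughly constant below 400 Hz and rising above it; they are not the NRCC-derived values.

    import numpy as np

    # Placeholder equivalent translational stiffness data (illustrative only),
    # roughly constant below 400 Hz and rising above it.
    f = np.array([100., 160., 250., 400., 630., 1000., 1600., 2500.])
    k = np.array([1.0e6, 1.0e6, 1.1e6, 1.2e6, 2.0e6, 3.5e6, 6.5e6, 1.2e7])

    # Third-order polynomial in log10(f) fitted to log10(k), as in Vigran's approach.
    coeffs = np.polyfit(np.log10(f), np.log10(k), deg=3)
    k_fit = 10.0 ** np.polyval(coeffs, np.log10(f))

    print(np.round(coeffs, 3))
    print(np.round(k_fit / k, 2))   # ratio of fit to data at each frequency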
Afdeling Akoestiek en Thermische Fysica - Afdeling Bouwfysica, K.U.Leuven, Celestijnenlaan 200 D, B-3001 Heverlee, Belgium
ABSTRACT
In building acoustical laboratories, the sound transmission loss of structures is typically measured by placing the structure in an aperture between two reverberant rooms. It is known that the location of the specimen in the aperture can affect the results due to the niche - or tunneling - effect. In this paper, a Wave Based Model is used to numerically investigate the tunneling effect in sound transmission loss determination of single and double walls. The field variables (plate displacements and sound pressures) are expanded in terms of structural and acoustic wave functions. The model is validated with experimental results of lightweight single walls. A parametric study for single and double glazing shows that the position of the wall in the opening can significantly influence sound transmission loss below coincidence. As for single walls, the sound transmission loss of double walls is minimal when placed in the center of the niche opening and maximal for the edge positions. The difference, however, is greater for double walls in the mid-frequency range, where sound transmission is highly dependent on the angle of incidence.
Brekke & Strand akustikk as, Oslo, Norway
ABSTRACT
In our fieldwork, we have measured the sound insulation of lightweight partition walls with separate steel studs for years. The results are stored in our database, which contains more than 350 measurements of lightweight partition walls. In this study, 50 measurements and two wall configurations have been selected. The first configuration consists of 3 x 13 mm gypsum boards on each side and a 250 mm void partly filled with mineral wool. The second configuration is identical except that it consists of 2 x 13 mm gypsum boards on each side and a 200 mm void. These measurements were made primarily at various performing arts centers around Norway, where the rooms have a "box-in-box" solution, but measurements from other locations are also present. Measurements with obvious flanking transmission were excluded from the study. We present some case studies in which we describe the room configuration and compare the predicted values to the measured results. Our prediction is based on equations in the European Standard EN 12354-1 and well-known empirical models, with empirical corrections from the measurements. The literature has also provided guidance on how to calculate the sound insulation as well as simplified methods for handling uncertainties and safety margins.
New Zealand Forest Research Institute, Rotorua, New Zealand
ABSTRACT
Over the years timber-framed floor system designs have sometimes included some sort of granular material infill in order to reduce sound transmission between tenancies. Historically this has been in the form of some readily-available, low-cost material (e.g. ash, scoria, and sand). Recent research has been conducted into timber-framed floor toppings which contain a granular material infill in the form of a sand and sawdust mixture. The sand and sawdust mixture increases the mass of the floor, which improves the low-frequency impact insulation performance. This sand and sawdust infill also greatly increases the vibration damping in the upper part of the floor, improving the mid to high-frequency sound insulation performance, while also making the system robust to construction defects. This paper presents results of low-frequency impact insulation and flanking transmission measurements of timber-framed floors which have a sand and sawdust mixture in the floor topping.
(1) LVA, INSA Lyon, Lyon University, Villeurbanne, France (2) ACDC 34, Mauguio, France
ABSTRACT
The design of light panels with high acoustic insulation is of great interest in the transportation and building industries. The main way to reach this goal is the use of double walls with highly damped skin panels (made of sandwich steel-polymer-steel material) and absorbing material in the gap between the walls. The main difficulty in achieving high insulation is due to the necessary mechanical links between the skin panels, which produce mechanical transmission from one skin to the other.
To design such panels we developed a theoretical model based on the patch mobility technique, which allows the sound transmission to be calculated from the characterization of independent subsystems. A strong numerical advantage of the approach is the representation of the exciting acoustic field with blocked patch pressures instead of uncorrelated plane waves. The principle of the method for calculating the double wall transmission loss is first presented and then applied to the case of interest. Results are presented to illustrate the field of application of the model and to discuss the physical influence of the mechanical links, skin plate damping and absorbing material.
(1) Chungbuk National University, Korea (2) Korea Institute of Construction Technology, Korea (3) Chungbuk National University, Korea
ABSTRACT
The present study aims to reduce floor impact noise in multi-storey housing using latex-polymer-modified cement mortar. A method of construction to reduce both light-weight and heavy-weight impact noise was sought. In order to achieve noise reduction, the structure was designed to replace the mortar layer closest to the impact source with latex-modified polymer mortar, which can directly attenuate floor impact noise and vibration. Since the mortar must meet a strength standard, a series of material tests was performed to characterize the material properties using mortar specimens mixed with 0 %, 5 %, 7 % and 9 % SBR latex. The optimum mixing ratio was determined from these material tests, after which a 7 % SBR latex-modified mortar and a 5 % low-Tg latex mortar were prepared to investigate the effect of SBR latex on floor impact noise reduction.
Acoustic tests were undertaken on four specimens with different latex polymer content. The results show that while specimen 2 (latex-modified mortar laminate) gives better noise reduction for light-weight impact noise, specimens 3 and 4 (low-Tg latex polymer mortar laminates) reduce heavy-weight impact noise. A remarkable reduction occurred when rubber powder was mixed with the low-Tg latex polymer (specimen 4) and the impact ball was used as the sound source: Li,Fmax,AW was 37 dB, compared with 45 dB for the general floor system. The light-weight and heavy-weight impact tests demonstrated that the SBR latex-modified mortar generally gives better noise reduction characteristics than the unmodified mortar over the full range of frequencies, and the benefit becomes pronounced above 125 Hz.
ÅF, Goteborg, Sweden; Chalmers University of Technology, Sweden
ABSTRACT
A three-year research programme has recently started in Sweden, aiming at improving the mutual connection between perceived sound, vibration and springiness and their corresponding measured values in lightweight structures. The main goal is to describe new objective measures for assessing acoustic quality, with the expected result that the experienced sound, vibration and springiness no longer depend on the structural bearing system of the building. The consequence of the new methods will be that various structural systems within one sound class in a classification scheme will receive fairly equal evaluation with regard to subjective response. The research programme, AkuLite, is divided into seven work packages (WP). Initial results from one work package (WP 4), related to current subjective and objective field data, are presented in this paper. The aim of this part of the study is to investigate the reliability of measurement results and of the evaluation procedure when these are carried out in accordance with ISO 140 and ISO 717. It involves an initial inventory and analysis of objective field measurements, according to ISO 140, performed on lightweight structures by various consultants in Sweden. The study considers principal problems with current standards affecting each operator performing field measurements in lightweight structures and thereby impacting the quality of the final result. Typically, the measured sound pressure level and the reverberation time differ considerably at low frequencies compared to heavy structures. The distribution of measurement results between various measurement positions is rather random in the low-frequency region, i.e. there is no typical pattern for lightweight structures in general. The complexity of different lightweight structural bearing systems and their sensitivity in the low-frequency range require a more rigid description of the measurement and evaluation procedure. The lack of objective sound and vibration data below 50 Hz is also a problem, since subjective disturbance often emanates from this frequency range.
(1) Industrial Research Ltd., Auckland, New Zealand (2) Acoustics Research Centre, The University of Auckland, Auckland, New Zealand
ABSTRACT
The worldwide increase in population has highlighted the inadequacies of sound insulation in buildings. The problem is particularly evident in medium- to high-density housing, which is projected to account for 30% of Auckland's housing by 2050. This has implications for occupants' health, productivity and quality of life. Preventing sound transmission through walls and ceilings in the lower frequency range of human hearing is particularly important, but is a difficult problem. This problem provides an opportunity to ask the question: can we design an acoustic insulation system that provides improved sound insulation performance over a conventional system of equivalent mass density within this frequency range?
This paper outlines an investigation into novel meta-materials known as locally resonant structures. These structures can exhibit acoustic band gaps, i.e. frequency ranges of unusually low sound transmission. One-dimensional mathematical models are used in conjunction with finite element analysis (FEA) to develop various locally resonant element concepts functional below 1 kHz. Dynamic and impedance tube testing is then used to experimentally verify the performance of the elements through comparisons with modeling data. A performance index has been developed to provide a method for comparing resonator elements and to aid design optimization. Various resonator elements have shown a peak effective mass up to fifty times greater than their rest mass. Locally resonant structures have increased peak transmission losses by as much as 40 dB over that of a non-resonant structure of equivalent area density within the designated frequency range. These resonators can be distributed throughout the wall structure on a scale shorter than the wavelength of structural vibrations in the wall matrix. The resulting system has the potential to provide significantly higher transmission loss at low frequencies than conventional wall systems of similar size and weight. The longer-term goal is to determine an effective design of local resonator that can be incorporated into a practical insulation system.
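The "peak effective mass" referred to here follows from the standard mass-in-mass resonator result: a frame mass M carrying an internal mass m on a spring tuned to frequency f0 presents a frequency-dependent effective mass that grows sharply just below resonance. A minimal sketch, with purely illustrative parameter values, is:

    import numpy as np

    def effective_mass(f, M=0.5, m=0.1, f0=200.0, eta=0.02):
        """Dynamic effective mass of a frame mass M [kg] carrying an internal
        resonator of mass m [kg] tuned to f0 [Hz]; eta is the resonator loss
        factor.  All values are illustrative only."""
        w, w0 = 2 * np.pi * f, 2 * np.pi * f0
        # Complex spring stiffness k(1 + j*eta) makes the resonance term complex.
        return M + m * w0**2 * (1 + 1j * eta) / (w0**2 * (1 + 1j * eta) - w**2)

    f = np.array([50.0, 150.0, 190.0, 199.0, 210.0, 400.0])
    ratio = np.abs(effective_mass(f)) / (0.5 + 0.1)   # relative to the rest mass
    print(np.round(ratio, 1))                          # peaks close to the tuning frequency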
(1) Building Research Institute, Japan (2) Hokkaido Northern Regional Building Research Institute, Japan (3) General Building Research Corporation of Japan, Japan (4) National Institute of Advanced Industrial Science and Technology, Japan
ABSTRACT
The acoustic environment performance, especially the floor impact sound insulation performance, is one of the important aspects of performance in apartment houses. In Japan, heavy-weight floor impact sound becomes a problem more often than light-weight floor impact sound. Moreover, the floor impact sound insulation performance of wood-frame constructions is low compared with that of concrete constructions. Therefore, we have studied the floor impact sound insulation performance of wood-frame constructions. The floor impact sound insulation performance depends on the specification of the separating floor. This paper presents the effect of resilient channels and floating floors (dry double-system floors) on floor impact sound insulation performance. Resilient channels are little used for wooden constructions, and floating floors are usually used for concrete constructions in Japan. The specifications of resilient channels which were effective for floor impact sound insulation were investigated. Furthermore, the reductions of transmitted impact sound of the floating floors were measured in accordance with ISO 140-11 or JIS A 1440-1 and -2. The reference floors are standardized lightweight floors (wooden constructions) in ISO 140-11 and concrete floors in JIS A 1440-1 and -2. The measured results indicate that floating floors are effective in improving floor impact sound insulation in wood-frame constructions.
(1) Centre Technique de Matériaux Naturels de Construction (CTMNC), Clamart, France (2) Laboratoire PHASE, Toulouse, France (3) CRIR, Rantigny, France (4) Centre Scientifique et Technique du Bâtiment (CSTB), Saint-Martin-d'Hères, France
ABSTRACT
Reducing the energy consumption of buildings is of major environmental interest. In this context, brick and tile producers are investigating more efficient materials responding to the new regulation, which is stricter in terms of insulation. As a result, hollow bricks have been designed to satisfy the requirements in these matters. However, because of their large thickness, inhomogeneity and anisotropy, the usual laws of building acoustics used for the computation of sound transmission are not satisfactory for these building elements. This work consists in designing a new physical model to understand and predict their acoustic properties in the audible range [100 Hz - 5000 Hz]. Because of the complex structures involved, a simplified approach has been adopted: a hybrid method coupling an analytical model of the transmission loss of a finite, multi-layered, anisotropic thick plate with the finite element method (FEM), used to homogenize the hollow block. The good agreement obtained between predictions and measurements allows us to underline the physical phenomena responsible for the sound transmission through these partitions. Finally, a parametric study is carried out in order to point out which physical parameters are relevant to improve the sound insulation of such materials.
(1) Fire Insurers Laboratories of Korea (2) Doosan Engineering & Construction Co. Ltd., Seoul, Korea (3) University of Seoul, Korea
ABSTRACT
Floor impact sound is one of the most serious noise sources in residential buildings. Most heating systems in Korean residential buildings are floor heating systems, called Ondol. The Korean heated floor system usually consists of a finishing material, mortar with a heating coil, light-weight aerated concrete and reinforced concrete. In this study, the mortar and the light-weight aerated concrete were modified for the isolation of heavy-weight impact sound. Glass foam aggregate was added to the light-weight aerated concrete, and the water-cement ratio and the amount of cement in the mortar were changed. The heavy-weight impact sound pressure level was measured in a reverberation chamber using the bang machine and the impact ball. The size of each specimen was 1 m by 1 m. The substitution ratio of glass foam aggregate in the light-weight aerated concrete shows a relationship with the heavy-weight impact sound pressure level. In addition, the heavy-weight impact sound pressure level decreased with increasing water-cement ratio and amount of cement in the mortar.
(1) Korea Institute of Construction Technology, Korea (2) Chungbuk National University, Korea
ABSTRACT
The bang machine has been considered to have problems not only with its impact force and frequency response, which differ from those of real impact sources such as children's jumping and running, but also with damage to wooden-structure housing. Therefore, a new impact sound source with a lower impact force, to prevent damage in wooden-structure housing, was developed. The impact ball was adopted as the second standard impact source in JIS A 1418-2 and ISO 140-11. In the present study, floor impact sounds generated by the impact ball with different drop heights on four floors of mock-up wooden buildings and in a concrete mock-up facility were investigated and compared to jumping sound. The results show that the L-index range was L-45 to L-65. The impact ball sound dropped from 10 cm to 30 cm was most similar to the jumping sound. The impact sound levels at 250 and 500 Hz were more sensitive to drop height than those at lower frequencies, and it was shown that the room mode had an effect at a microphone height of 120 cm. It was also revealed that the difference in impact noise caused by inaccurate manipulation of the impact ball height was within 1 dB.
(1) Veneklasen Associates, Santa Monica, California, USA (2) Paul S. Veneklasen Research Foundation, Santa Monica, California, USA
ABSTRACT
The use of resilient channel in stud-framed walls in multi-family residential buildings is common in North America, allowing Building Code requirements to be met with single-stud wall construction. There is considerable evidence that the brand and model of resilient channel have a significant effect on acoustical performance, as do numerous installation errors such as short-circuits. Unfortunately most of the evidence is anecdotal and there has been limited systematic study. The authors previously published preliminary results of a laboratory testing program (Proceedings of Internoise 2009) that systematically isolated and quantified the acoustical effects of channel brand and model, and of several installation errors. A second testing program has recently been completed with additional brands and models of channel and additional installation errors. This paper summarizes the results of the testing programs, which provide valuable quantitative data on the effects of differences in resilient channel and installation on acoustical performance.
University of Canterbury, Christchurch, New Zealand
ABSTRACT
The transmission of noise from the outside environment into dwellings is often a concern for the inhabitants. However, the transmission of noise through the roof is often overlooked when the sound insulation of a dwelling is being assessed, unless the dwelling is located near an airport. The transmission of noise through the roof system depends not only on the performance of the roof cladding, but also on the structure-borne noise attenuation of the trusses, the ceiling and the ceiling insulation. In this investigation, the sound insulation of different configurations of roofing systems was evaluated in the laboratory. The configurations tested included variations in the cladding, the sarking installed under the cladding, the thickness of the insulation installed between the ceiling joists, and the ceiling construction. The outcome of the study will help to improve the acoustic performance of roofing systems as well as assist architects in the selection of roofing systems.
School of Civil Engineering and Architecture and Urban Design, Department of Architecture and Building, State University of Campinas-UNICAMP-Campinas, Brazil
ABSTRACT
Lately there has been a growing concern for energy-efficient policies to combat thermal discomfort in hot, humid climate regions. Many of these solutions are thermally adequate, but acoustically unfavourable. The use of openings in buildings in this climate is known as an important passive design strategy to make use of natural ventilation. The ventilated sill is an architectural tool that applies to this bioclimatic integration. Bittencourt describes it as a device typically built in concrete, generally in an inverted-L shape, superimposed on an opening at the sill below the windows, which aims to act as an additional source of air movement provided by the openings. The objective of this work is to show the influence of different materials on the noise performance of the sills in naturally ventilated rooms. The methodology of the study is to assess the acoustic performance of the ventilated sill in different façades: a fully closed façade; a façade with a simple opening without a ventilated sill; and four cases with the same opening using four different ventilated sills, some covered with a reflective material (concrete and granite) and others with a more absorptive material (sheet metal and wood with rock wool). The results showed an improvement in the acoustic performance of the rooms as the sill elements became more absorptive. It was observed that by treating the inner surface of the ventilated sill in the façade, the acoustic conditions inside the building also improved.
(1) ACUSTICA PARATI&CO, Acoustical Consulting, Crema, Italy (2) ITC-CNR, Construction Technologies, Institute of Italian National Research Council, Milano, Italy
ABSTRACT
The study concerned the evaluation of the sound insulation of light structures, such as wooden roofs. A first analysis focused on laboratory evaluations of different sequences of layers, obtained with both common and innovative materials. Based on the results of the laboratory tests on the acoustic behaviour of the insulating configurations of the wooden roof, the second step of the research consisted in studying an ad hoc outdoor full-scale test cell on which a second set of measurements was performed. The comparison between the laboratory results and those obtained from the outdoor test cell showed a remarkable difference in sound insulation between the two conditions. Moreover, measurements were carried out with and without roof tiles, and it emerged that the tiles heavily affect the sound insulation of the entire roof. The final step of the research involved in situ measurements of wooden roofs in real buildings, using two of the insulating configurations tested. The results of the comparison between laboratory, external cell and in situ measurements are shown.
Acoustics Section, National Physical Laboratory, New Delhi, India
ABSTRACT
This paper presents the basic factors that control sound transmission in lightweight wall structures for residential and commercial buildings. An understanding of the basic principles can lead to design economies and the avoidance of error. Small changes in the arrangement of materials can yield large changes in noise control with little or no increase in cost. The sound transmission losses of single-layer walls and cavity walls are determined by the physical properties (i.e., mass law, coincidence dip, mass-air-mass resonance) of the lightweight wall structures and the methods of assembly. Examples of new wall structure designs with high sound transmission class (STC) values exploiting these factors are also discussed in this work.
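The controlling quantities named here can be estimated with the usual textbook relations: the field-incidence mass law, the critical (coincidence) frequency of a panel, and the mass-air-mass resonance of a cavity wall. The sketch below applies them to a hypothetical 13 mm gypsum-board cavity wall; the constant -47 dB in the mass-law line is the common field-incidence approximation, and the material values are assumed for illustration.

    import numpy as np

    RHO0, C0 = 1.21, 343.0            # air density [kg/m^3] and sound speed [m/s]

    def mass_law_tl(m, f):
        """Field-incidence mass-law transmission loss [dB] (approximation)."""
        return 20 * np.log10(m * f) - 47.0

    def critical_frequency(h, E, rho, nu=0.3):
        """Coincidence (critical) frequency [Hz] of a homogeneous panel of
        thickness h [m], Young's modulus E [Pa] and density rho [kg/m^3]."""
        B = E * h**3 / (12 * (1 - nu**2))           # bending stiffness per unit width
        m = rho * h                                  # surface mass
        return C0**2 / (2 * np.pi) * np.sqrt(m / B)

    def mass_air_mass(m1, m2, d):
        """Mass-air-mass resonance [Hz] of a double wall with cavity depth d [m]."""
        return (1 / (2 * np.pi)) * np.sqrt(RHO0 * C0**2 * (m1 + m2) / (m1 * m2 * d))

    # Hypothetical 13 mm gypsum-board leaves (about 8.5 kg/m^2 each), 90 mm cavity.
    m = 8.5
    print("mass law at 500 Hz:", round(mass_law_tl(m, 500.0), 1), "dB")
    print("critical frequency:", round(critical_frequency(0.013, 2.5e9, 650.0)), "Hz")
    print("mass-air-mass resonance:", round(mass_air_mass(m, m, 0.09)), "Hz")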
Luleå University of Technology, Luleå, Sweden
ABSTRACT
Variations in sound insulation are a problem for lightweight constructions, since large variations demand a high safety margin relative to the legal requirements on acoustical performance. The building costs can be lowered if these variations can be characterised and identified. This paper describes an investigation of how the vibro-acoustical properties of nominally identical dwellings change during the construction phases. The objective is to find out whether acoustical deviations in the field can be traced to earlier stages of construction. It also gives an indication of how the variations grow during the process. Throughout the investigation, all measurements were made on the same building elements. The building technique under study is a lightweight timber system consisting of industrially produced prefabricated volumes. Acceleration level measurements have been performed in the factory on building elements at different stages of completion: plates attached to beams, floor with gypsum board covering, the whole volume without floor parquet, and the finished volume. An ISO tapping machine was used as the excitation source and accelerometers were placed along the edges of the floors and across the surface. Field measurements were performed in the finished building. In addition to the analysis of acceleration levels, airborne and impact sound insulation were measured in situ. Acoustical deviations were found for frequencies above 400 Hz, but these could not be traced back to the earlier construction stages.
(1) Hebrew University-Hadassah Medical School, Jerusalem, Israel (2) Artann Laboratories, Inc., NJ, USA.
ABSTRACT
An innovative technology based on the use of ultrasonic cylindrical standing waves for continuous monitoring of the quality of various liquid food products, such as milk, juices, beer, wine, and drinking water, is described. A unique, proprietary feature of the developed ultrasonic analyzer is that it employs a combined mode of operation, using high-intensity, low-frequency waves (10 W/cm², 1 MHz) for separation and concentration of the high-molecular-weight particles (fat globules or cells) and low-intensity, high-frequency waves (0.5 W/cm², 10 MHz) for compositional analysis. High accuracy for ultrasound velocity measurements (up to 0.001%) and ultrasound attenuation (of about 1%) and rapid testing times (2-20 s) have been achieved. Comparative analyses of the ultrasonic method against standard reference techniques have produced linear calibration curves for major components with correlation coefficients higher than 0.95. It is thus possible to directly monitor total protein and fat content and somatic cell count in raw milk in cowsheds, or salinity, turbidity, specific gravity, and particles (bacteria) in drinking water. Advantages of the proposed technology include its reagent-free nature, no need for sample pretreatment, ease of use, and low cost.
Tokyo Institute of Technology, Tokyo, Japan
ABSTRACT
In most aerial power ultrasound applications, the wavelength is short and the intense field exists in a small space. A method for measuring such ultrasonic fields using a fiber-optic probe has been proposed, where the modulation of optical reflectivity at the end of the optical fiber provides the absolute value of sound pressure through a change in the refractive index of air. In this report, the absolute sensitivity of the probe is discussed theoretically and experimentally. The effects of humidity, temperature and atmospheric pressure are discussed, and a correction formula is presented for the absolute sound pressure. The sensitivity constant of the reflection coefficient varies by only 0.5% over the usual ranges of humidity and temperature, while it changes by approximately 4% over the atmospheric pressure range from 980 to 1010 hPa. In addition, a comparison with a commercial probe-type condenser microphone was carried out. Broadband light is guided through a single-mode fiber which is inserted in the sound field to be measured and reflected at the end. The reflected light returns through the same fiber and is detected by a GaAs photodiode via a circulator. The output is monitored using a lock-in amplifier. A standing-wave field between the end of a Langevin transducer and a reflector is used for the experiment. As a result, the sound pressure measured by the optical fiber probe was 8-14% higher than that measured with a B&K 4138 for the 18-kHz standing-wave field in air. Experimental results for a much wider sound pressure range and at various frequencies will be reported.
(1) Shibaura Institute of Technology, Japan (2) Tokyo Metropolitan University, Tokyo, Japan
ABSTRACT
Magnetic resonance imaging (MRI) equipment is important in medical diagnosis. The equipment uses tomography based on the nuclear magnetic resonance phenomenon for imaging. The driving sound of MRI equipment is loud and is caused by the gradient magnetic field controlled by the imaging sequence; the loudness therefore depends on the imaging sequence. The sound pressure level of the driving sound sometimes exceeds 100 dB, which makes ear protectors necessary. Some research on the driving sound already exists. This paper presents measurements of the distribution of the equivalent continuous A-weighted sound pressure level in a three-tesla MRI system (Philips Achieva 3.0T X-series). Measurements were made at 21 points on the table: 15 points inside the bore and 6 points outside the bore. The imaging sequences measured were slice positioning, T2-weighted imaging (T2W) and echo planar imaging (EPI). The measurement data length was 30.04 seconds. The results showed that the driving sound of each imaging sequence on the table was loud. The maximum equivalent continuous A-weighted sound pressure levels (LAeq) of slice positioning, T2W and EPI were 114.7 dB, 106.0 dB and 114.3 dB, respectively. These values were higher than the driving sound of 1.5-tesla MRI equipment. The maximum instantaneous sound pressure level was 123.9 dB over the three sequences of the 3-tesla MRI equipment, which shows that the instantaneous value is an important parameter.
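The reported quantities can be reproduced from a recorded pressure signal as follows: apply A-weighting, then compute the energy-equivalent level over the measurement duration, and the peak level from the instantaneous pressure. In the sketch below the A-weighting step is assumed to have been applied already, since implementing the full IEC 61672 filter is outside the scope of this note; the toy signal is illustrative only.

    import numpy as np

    P_REF = 20e-6   # reference sound pressure [Pa]

    def laeq(p_a):
        """Equivalent continuous A-weighted level [dB] from an already
        A-weighted pressure signal p_a [Pa] covering the measurement duration."""
        return 10.0 * np.log10(np.mean(p_a**2) / P_REF**2)

    def peak_level(p):
        """Peak (instantaneous) sound pressure level [dB] of a pressure signal."""
        return 20.0 * np.log10(np.max(np.abs(p)) / P_REF)

    # Toy signal: 30.04 s of noise at roughly 114 dB LAeq (illustrative).
    fs = 48000
    p = np.random.randn(int(30.04 * fs)) * 10.0   # about 10 Pa rms
    print(round(laeq(p), 1), "dB LAeq;", round(peak_level(p), 1), "dB peak")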
School of Physics, UNSW, Sydney, Australia; ResMed, Bella Vista, NSW, Australia
ABSTRACT
Standing waves, resonances and/or singularities during measurement and calibration often limit the precision of measurements of acoustic reflection spectra and acoustic impedance spectra. This paper reviews and compares several established techniques, and then describes techniques that incorporate some or all of three features that together considerably improve precision and signal-to-noise ratio. The first feature is to minimise problems due to resonances by calibrating the apparatus using up to three different acoustic reference impedances that do not themselves exhibit resonances. The second involves using multiple pressure transducers to reduce the effects of measurement singularities. The third involves shaping the spectral envelope of the stimulus signal. Here, the envelope is adjusted iteratively to control the distribution of errors across the particular measured impedance spectrum. The most useful non-resonant load is the acoustically infinite waveguide, whose impedance is real and independent of frequency. We describe the performance of different approximations to the infinite waveguide, including an 'infinity in a box', a portable calibration load.
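One way to see why multiple reference impedances help is the generic multiple-load calibration argument: if the raw measured spectrum m(f) is related to the load impedance Z(f) by a bilinear, three-constant relation, then three known reference loads determine the constants at each frequency, after which any further measurement can be inverted for Z. The sketch below solves this per-frequency system; it is a generic illustration of that idea, not the authors' specific apparatus model.

    import numpy as np

    def calibrate_three_loads(m1, m2, m3, Z1, Z2, Z3):
        """Per-frequency constants (a, b, c) of the assumed bilinear model
        m = (a*Z + b) / (c*Z + 1), from three reference loads (arrays over f)."""
        a = np.empty_like(m1, dtype=complex)
        b = np.empty_like(m1, dtype=complex)
        c = np.empty_like(m1, dtype=complex)
        for i, (mm, ZZ) in enumerate(zip(zip(m1, m2, m3), zip(Z1, Z2, Z3))):
            A = np.array([[Z, 1.0, -m * Z] for m, Z in zip(mm, ZZ)], dtype=complex)
            a[i], b[i], c[i] = np.linalg.solve(A, np.array(mm, dtype=complex))
        return a, b, c

    def impedance_from_measurement(m, a, b, c):
        """Invert the calibrated bilinear relation for an unknown load."""
        return (b - m) / (m * c - a)

    # Toy check at three frequencies with made-up constants and reference loads.
    a0, b0, c0 = 2.0 + 0.1j, 0.5, 0.01 + 0.002j
    Z1, Z2, Z3 = np.full(3, 10.0 + 0j), np.full(3, 5.0 + 0j), np.full(3, 1.0 + 0j)
    model = lambda Z: (a0 * Z + b0) / (c0 * Z + 1.0)
    a, b, c = calibrate_three_loads(model(Z1), model(Z2), model(Z3), Z1, Z2, Z3)
    Zu = np.full(3, 3.0 + 1.0j)
    print(np.round(impedance_from_measurement(model(Zu), a, b, c), 6))   # recovers Zu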
Physikalisch-Technische Bundesanstalt, Braunschweig, Germany
ABSTRACT
Many applications in objective audiometry measure and evaluate responses of the hearing system to acoustic short-term stimuli. One of the earliest and best-known among these signals of short duration is the 'reference pulse' specified in the standard IEC 60645-3. This definition of an electrical reference signal dates back to times when audiometric equipment essentially consisted of analogue components used in laboratory set-ups. Therefore, overshoot caused by the limited bandwidth of the electrical signal was not considered in this definition. Modern equipment, however, often uses audio-frequency DA converters which imply a limited bandwidth of the signal path. Even if their sample rate is chosen such that the physiological and psychoacoustic effects of the pulse do not differ from those of the quasi-non-band-limited pulse, the transients caused by DA conversion and bandwidth limitation mean that the time response of the pulse does not comply with the current IEC specification. In order to resolve this problem, a modified definition of the reference pulse using detailed time-domain tolerance diagrams is discussed.
Meanwhile, in addition to the 'signals of short duration' specified in IEC 60645-3, a variety of short-term stimuli with considerably different temporal characteristics is available for audiometry. ISO 389-9 explicitly allows the application of these signals, provided that they are clearly specified by the manufacturer. The concept of expressing their reference thresholds by means of peak-to-peak Equivalent Reference Equivalent Threshold Sound Pressure Levels (peRETSPLs) is easy to implement, even with rather simple instrumentation. However, this concept results, for some stimuli, in calibration values which do not at all correlate with either the behavioral hearing thresholds or the spectral energy of the signals. Attempts at improving the calibration procedure with respect to these 'inconsistencies' are sketched out in this paper.
Federal University of Santa Catarina, Brazil
ABSTRACT
Beamforming is an acoustic imaging technique that can estimate the radiation pattern of single or complex sound sources and produce a map of the results. The pass-by noise test is a standardized test that aims to evaluate the overall noise of a vehicle measured from the sideline.
Coupling the idea of the pass-by test with the extension of the beamforming technique to moving sources provides access to the recognition of the sound sources produced by a moving vehicle, for example rolling tyres, engines and exhaust systems.
The present paper describes a low-cost system for applying the beamforming technique to the pass-by noise test. The system is based on low-cost electret microphones mounted in a metallic array and connected by coaxial cables to the acquisition system.
The application section of the paper presents the resulting beamforming maps of the pass-by noise test in more detail.
Brüel & Kjær, Nærum, Denmark
ABSTRACT
Only a small percentage of all acoustical measurements are performed in the well-defined and well-controlled environment of a calibration laboratory; on the contrary, most acoustical measurements are made under non-controlled conditions which in many cases are not even known beforehand. This is the reason that some acoustical standards, such as the IEC 61672 series (the "Sound Level Meter standard"), specify the performance of the measuring microphone over a wide range of environmental conditions. Modern quality measuring condenser microphones often meet or exceed the requirements even under widely varying conditions. However, one important, and unfortunately in many cases major, source of error is often neglected: the response of the actual microphone type in the actual sound field.
The influence of different sound fields on the measurement error is discussed in some detail with practical examples, and it is shown that a worst-case error exceeding 10 dB at 20 kHz is a real risk. After a brief discussion of condenser microphone design rules it is shown how the use of new technology has made it possible to develop a new condenser microphone which drastically reduces the error caused by the influence of an unknown sound field or varying angle of incidence. Finally, test results from production samples of the new microphone are shown.
(1) Laboratorio de Ultrasonidos, Departamento de Física, Universidad de Santiago de Chile, Chile (2) Department of Mechanical Engineering, University of São Paulo, São Carlos, Brazil
ABSTRACT
Since its discovery, the acoustic emission (AE) phenomenon has received great attention. AE was applied to the study of potentially catastrophic failures in metal devices, for instance gas containers; the signals obtained from these devices are rather clear, and several flaw localization algorithms were developed and tested, obtaining good agreement between the predicted localization and the actual position of the cracks. A second development was reached with the discovery of the Kaiser effect, which allows, at least in theory, the determination of the load history of a material. In rocks, the AE phenomenon has received recent attention because the mining industry is interested in the dynamic localization of cracks produced by the hydrofracture process; as is well known, hydrofracture is a technique that pre-fractures a medium by injecting high-pressure liquid into a deep cylindrical borehole drilled in the material body.
Research using the AE phenomenon for the dynamic localization of cracks propagating during a hydrofracture process is being conducted in the Ultrasonic Laboratory of the Universidad de Santiago de Chile. A new localization algorithm was developed using dynamic measurements of the ultrasonic propagation velocity in the rocks; the algorithm works well, allowing the determination of crack positions in laboratory samples with higher precision than the conventional method. However, it is difficult to use the method at industrial scale because the signals are too weak in a very noisy environment and it is difficult to mount enough sensors to produce a precise localization of a fissure propagating in a rock during a hydrofracture process. In this paper a new approach to the problem is presented: the idea is to treat the localization of the flaw as an inverse problem. The inverse problem technique consists of determining the source characteristics by analyzing the signals coming from it. In the classic version the source characteristics are established by analyzing the radiation scattered from it, which requires knowledge of the Green's function of the medium. Because of their inhomogeneous character, it is very difficult to write a Green's function for rocks. A non-classic inverse localization algorithm was therefore developed; the first version of the algorithm was two-dimensional and obtained good results. Unfortunately, the algorithm does not converge in its three-dimensional version. In a new approach, a localization method was developed by adding a genetic algorithm. The first tests show that this localization method converges well and that localization is possible.
(1) Department of Solid State Physics, University of Siegen, Siegen, Germany (2) Institute of Experimental Physics II, University of Leipzig, Leipzig, Germany
ABSTRACT
In physical acoustics, visualization of individual acoustic wave fronts on piezoelectric crystals is one of the fundamental problems. Excitation of ultrasound in the Coulomb field of scanned, electrically conductive, spherical local probes, and similar detection, have been employed for imaging the transport properties of acoustic waves in piezoelectric materials, including single crystalline wafers. Excitation in the field of a local probe allows generation and detection of acoustic waves with a spatial resolution exceeding the diffraction limit. The temporal resolution is not limited by resonances as present in disc-type transducers and periodically structured interdigital surface acoustic wave transducers. For generation and detection of surface acoustic waves, two Coulomb probes have been positioned on a planar surface of piezoelectric crystals, with one of the probes scanned in two dimensions to record spatial dependencies. Wide-band operation as well as narrow-band quadrature detection schemes have been applied for sensitive detection of the propagating waves. The method has been applied to image propagating surface acoustic waves in two dimensions. The generation and detection scheme as well as the numerical modeling are demonstrated and applications are exemplified.
(1) Eindhoven University of Technology, Eindhoven, The Netherlands (2) Level Acoustics, Eindhoven, The Netherlands (3) Acoustics Engineering, Boxmeer, The Netherlands (4) Gade & Mortensen Akustik A/S, Charlottenlund, Denmark
ABSTRACT
The sound strength G is a room acoustical parameter used to investigate the sound distribution in a hall or to compare the loudness between different halls. ISO 3382-1 describes several methods to measure G. The accuracy of a G measurement depends on the accuracy with which the power level of the sound source can be determined or with which the measurement system can be calibrated. In this research the different available sound strength calibration methods have been compared using a standard omnidirectional (dodecahedron) sound source. Using the same measurement equipment, different (system) calibration methods are compared: 1] free-field measurement in an anechoic room, 2] sound intensity measurement in an anechoic room, 3] diffuse-field measurement in a reverberation room, 4] near-field measurement on stage in a concert hall. For method 1, measurements have been performed in a horizontal plane with white noise and exponential sweeps at various distances from the sound source. For method 2, intensity measurements according to ISO 9614-3 have been performed using white noise while scanning the sound source surface with a two microphone probe. For method 3, the direct method and the comparison method according to the ISO 3741 standard have been used to determine the sound power level using white noise. Also, a system calibration has been performed in the anechoic room and the reverberation room using exponential sweeps. Finally, for method 4, a convenient near-field measurement method at a distance of 1 m has been performed on the stages of a large and a small concert hall using white noise and exponential sweeps. It has been found that the intensity and the diffuse-field calibration methods give substantially equal results. The horizontal-rotation free-field calibration method gives results that differ significantly from those of the diffuse-field and intensity methods. For a survey G measurement in a concert hall it is sufficient to perform an on-site calibration.
NTT Cyber Space Laboratories, NTT Corporation, Japan
ABSTRACT
The direct-to-reverberant energy ratio (DRR) has attracted attention as useful information for estimating the distance from a microphone to a speaker. In particular, when the environment is highly reverberant, conventional microphone array techniques fail to estimate the distance correctly. This is because the time and sound level differences of arrival of the direct sound between microphones, which are exploited as keys to the sound source position, become ambiguous due to the reverberation. However, even in such an environment, we can still estimate distance from the DRR, because the DRR keeps a one-to-one relation with distance in a reverberant environment.
The most basic way to obtain the DRR is to calculate it from the impulse response between the source and the microphone; however, this is quite cumbersome because a prior measurement of the impulse response is required. To overcome this restriction, a method was proposed to estimate the DRR directly from the received sounds. It utilized a binaural input signal and estimated the energy of the reverberant component by eliminating the direct component using the equalization-cancellation (EC) technique. However, the EC technique loses its accuracy of DRR estimation in highly reverberant environments because it is based on a model in which no reverberant component propagates from the direction of the sound source. On the other hand, we have proposed a DRR estimation method using a direct-to-reverberant (D/R) spatial correlation matrix model (hereafter called the "DRSC model"), which consists of the spatial correlation matrices of the direct sound and the reverberation. The DRSC model assumes that the direct sound propagates only from the direction of the sound source while the reverberation arrives uniformly from every direction. We then calculate the DRR from the power spectra of both components, which are estimated from the correlation matrix of the observed signals. In this contribution, we evaluate the adequacy of the DRSC model by using DRR estimation as an evaluation criterion. We first investigate the accuracy of DRR estimation based on the DRSC model under various conditions and then discuss the validity of the model. Furthermore, we also compare the results of DRR estimation with those of the conventional method based on the EC technique.
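The central idea of such a model can be illustrated with a small synthetic sketch: the observed spatial correlation matrix at one frequency is modelled as a weighted sum of a rank-one direct-sound matrix and a diffuse-field coherence matrix, and the two weights (direct and reverberant powers) are recovered by least squares. The array geometry, frequency and estimator below are simplified assumptions for illustration, not the authors' actual implementation.

import numpy as np

c = 343.0                      # speed of sound, m/s
f = 1000.0                     # analysis frequency, Hz (hypothetical)
mics = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0]])   # 3-microphone line array
src_dir = np.array([1.0, 0.0])                           # assumed source direction

# Rank-one direct-sound correlation model: a a^H for the plane-wave steering vector
delays = mics @ src_dir / c
a = np.exp(-2j * np.pi * f * delays)
R_dir = np.outer(a, a.conj())

# Diffuse (isotropic) coherence model: sinc(2 f d / c) between microphone pairs
d = np.linalg.norm(mics[:, None, :] - mics[None, :, :], axis=-1)
R_rev = np.sinc(2 * f * d / c)

# Synthesize an "observed" matrix with known powers, then recover them by least squares
P_dir_true, P_rev_true = 1.0, 0.5
R_obs = P_dir_true * R_dir + P_rev_true * R_rev

A = np.stack([R_dir.ravel(), R_rev.astype(complex).ravel()], axis=1)
powers, *_ = np.linalg.lstsq(A, R_obs.ravel(), rcond=None)
P_dir, P_rev = powers.real
drr_db = 10 * np.log10(P_dir / P_rev)
print(f"estimated DRR = {drr_db:.1f} dB")   # 3.0 dB for this synthetic case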
Toyama Prefectural University, Toyama, Japan
ABSTRACT
This paper aims at clarifying the discrepancies between actual-ear and artificial-ear responses. The actual- and artificial-ear responses from five models of insert earphones, three models of intra-concha earphones, and two models of headphones were measured and compared. The actual-ear responses were measured for one driver of each earphone/headphone with sixteen ears of eight subjects using a probe-tube microphone ER-7C (Etymotic Research). The artificial-ear responses were measured for four drivers of each earphone/headphone using a head and torso simulator (Brüel and Kjær, type 4128C) with a built-in ear simulator (type 4158C) and a pinna simulator (DZ9763). The results indicate that the actual-ear responses of intra-concha earphones and headphones below 4-5 kHz coincide with the artificial-ear responses and that the actual-ear responses of all earphones and headphones between 6 and 10 kHz are lower by at least 6 dB than the artificial-ear responses. The actual-ear responses of insert earphones below 300 Hz were lower by at least 6 dB than the artificial-ear responses due to acoustic leaks. We highly recommend that earphones and headphones be calibrated before acoustical experiments are conducted, keeping in mind the discrepancies between actual-ear and artificial-ear responses.
(1) Yamagata University, Yonezawa, Yamagata, Japan (2) Asahikawa Medical College, Asahikawa, Hokkaido, Japan
ABSTRACT
We have previously reported a high-speed three-dimensional ultrasound imaging system. A combination of coded excitation and the synthetic aperture focusing technique enables data collection at a high frame rate and focusing at any depth. The dynamic range of the ultrasound images was approximately 35 dB in our system; however, a dynamic range of 50 dB is required for medical imaging. To obtain a higher dynamic range, we tried to remove speckle noise by using harmonic imaging and frequency compounding techniques. The effect of tissue harmonic imaging techniques on two-dimensional B-mode images has been reported by various authors, and it has been shown that high-resolution, wide-dynamic-range images are obtained with harmonic imaging. In this study, we examine the efficiency of tissue harmonic imaging and frequency compounding techniques for our coded-excitation, synthetic-aperture-focusing imaging by computer simulation and experiment. We confirmed that the obtained images have less speckle noise than those of the conventional method. We found that a strong harmonic echo was generated on the center axis of the ultrasound transducer array in our experimental system; it was also observed in the results of computer simulations. In our sonication system, the peak power of the sound pressure waveform on the center axis of the array was considered to be higher than in other directions, so the strong harmonics were generated on the center axis of the ultrasonic array probe. We consider this to be caused by the properties of the code adopted in the synthetic aperture imaging system.
(1) Graduate School, The University of Tokyo, Japan (2) Institute of Industrial Science, The University of Tokyo, Japan
ABSTRACT
Swept signals are widely used nowadays in acoustic measurements to obtain impulse responses of the system under test. The overall spectrum, the inverse filter that compresses the sweep into an impulse, and the background noise conditions together prescribe the signal-to-noise ratio of the result as a function of frequency. This paper proposes a time-domain sweep synthesis method using composite square and monomial power function modulated sine sweeps that can customize the resulting SNR-frequency function. Theoretical and practical aspects as well as measurement results are presented.
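For orientation, the sketch below generates a conventional exponential sine sweep and its inverse (compression) filter; the composite square and monomial power-function modulation proposed in the paper is not reproduced, and all parameters are hypothetical.

import numpy as np
from scipy.signal import fftconvolve

def exp_sweep(f1, f2, duration, fs):
    # Standard exponential (logarithmic) sine sweep from f1 to f2 Hz
    t = np.arange(int(duration * fs)) / fs
    L = duration / np.log(f2 / f1)
    return np.sin(2 * np.pi * f1 * L * (np.exp(t / L) - 1.0))

def inverse_filter(sweep, f1, f2, fs):
    # Time-reversed sweep with an exponential amplitude envelope, so that
    # convolving the sweep with this filter compresses it to an approximate impulse
    n = len(sweep)
    t = np.arange(n) / fs
    L = (n / fs) / np.log(f2 / f1)
    return sweep[::-1] * np.exp(-t / L)

fs = 48000
s = exp_sweep(20.0, 20000.0, 5.0, fs)
inv = inverse_filter(s, 20.0, 20000.0, fs)
# Convolving a measured sweep response with `inv` yields the impulse response;
# here the sweep itself is compressed, giving a peak near the centre of the output.
ir = fftconvolve(s, inv)
print(ir.argmax(), len(ir) // 2)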
Technical University of Denmark, Copenhagen, Denmark
ABSTRACT
The spatial resolution of a beamformer based on a planar microphone array in a parallel measurement plane can be described by a two-dimensional convolution of the actual distribution of incoherent sources and the beamformer's response to a point source. Several methods are available for deconvolving the resulting blurred picture and thus improving the resulting resolution. This investigation is concerned with a similar deconvolution for the case where the source plane is vertical and the array is lying on the ground.
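A minimal sketch of the kind of deconvolution referred to, assuming a shift-invariant point spread function and using a generic Richardson-Lucy style non-negative iteration rather than any specific method from the paper; the map size, point spread function and source layout are synthetic placeholders.

import numpy as np
from scipy.signal import fftconvolve

def deconvolve_map(blurred, psf, n_iter=200):
    # Simple non-negative iterative (Richardson-Lucy) deconvolution of a beamforming map
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
        estimate = np.clip(estimate, 0.0, None)
    return estimate

# Hypothetical example: two incoherent point sources blurred by a Gaussian-like PSF
x = np.arange(-8, 9)
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 8.0)
truth = np.zeros((64, 64)); truth[20, 20] = 1.0; truth[40, 45] = 0.5
blurred = fftconvolve(truth, psf / psf.sum(), mode="same")
sharp = deconvolve_map(blurred, psf)
print(np.unravel_index(sharp.argmax(), sharp.shape))   # close to the stronger source at (20, 20)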
Institute of Sound Recording, University of Surrey, UK
ABSTRACT
This research introduces a novel technique for capturing binaural signals for objective evaluation of spatial impression; the technique allows for simulation of the head movement that is typical in a range of listening activities. A subjective listening test showed that the amount of head movement made was larger when listeners were rating perceived source width and envelopment than when rating source direction and timbre, and that the locus of ear positions corresponding to the pattern of head movement formed a bounded sloped path – higher towards the rear and lower towards the front. Based on these findings, a signal capture system was designed comprising a sphere with multiple microphones, mounted on a torso. Evaluation of its performance showed that a perceptual model incorporating this capture system is capable of perceptually accurate prediction of source direction based on interaural time and level differences (ITD and ILD), and of spatial impression based on interaural cross-correlation coefficient (IACC). Investigation into appropriate parameter derivation and interpolation techniques determined that 21 pairs of spaced microphones were sufficient to measure ITD, ILD and IACC across the sloped range of ear positions.
Ishinomaki Senshu University, Ishinomaki, Miyagi, Japan
ABSTRACT
Piezoelectric vibratory tactile sensors are used for measuring the softness and hardness of an object. In this research, the sensitivity of a longitudinal-bar-type tactile sensor was considered in order to establish design guidelines for vibratory tactile sensors. First, the sensitivity to the frequency change of a longitudinal bar resonator in contact with an object was derived approximately for designing the vibratory tactile sensor. It was experimentally clarified that the sensitivity is inversely proportional to the equivalent mass of the bar resonator. Next, the shape of a longitudinal bar resonator was calculated using the finite element method to improve the sensitivity of the vibratory tactile sensor. It was shown that the sensitivity is increased by using a horn-type tactile sensor. The obtained results will be useful for designing piezoelectric vibratory tactile sensors.
Peutz bv, The Netherlands
ABSTRACT
The random incidence absorption coefficient is measured in a reverberation room according to ISO 354 or ASTM C423-09. According to these standards, the diffusivity of a reverberation room is usually obtained with panel diffusers. Besides the fundamental problem that a reverberation room containing a highly absorptive specimen is not diffuse, these panel diffusers introduce a number of uncertainties, such as the acoustically effective volume and the total boundary surface of the reverberation room. It is clear that an overestimation of the effective volume will result in an overestimation of the absorption coefficient. This may be one of the reasons why some laboratories structurally measure absorption coefficients larger than 1, even when the volume of the specimen, edge absorption and the absorption of the surface covered by the specimen are taken into account.
To reduce the differences in measurement results between laboratories, and to reduce the overestimation of absorption coefficients, the possible use of volume diffusers instead of panel diffusers is investigated. Besides the advantage of exact values for the room boundaries and volume, the following criteria are investigated to substantiate the hypothesis that volume diffusers lead to better results: (1) the difference between measured and theoretical absorption coefficients for locally reacting surfaces; (2) the deviation between microphone-source measurements; (3) the influence of the position of the specimen; (4) the influence of the basic room proportions. The investigations have been performed with the aid of a 1:10 scale model of the reverberation room and the reverberation room itself at the Peutz laboratories. The results are presented in this paper.
Andong National University, Korea
ABSTRACT
In this paper, the directional sensitivities of fiber optic acoustic sensor arrays are shown experimentally. Three different orientations were selected: vertical, horizontal, and longitudinal. The fiber optic sensor was made using an aluminum mandrel, a hollow cylinder, onto which about 50 m of optical fiber was wound. An omnidirectional loudspeaker was used as the sound source, and a Sagnac interferometer was used to measure the sound. Two fiber optic sensors were used to form the arrays. The measured sound signals are shown in the frequency domain, and these results were compared with the signal detected by a microphone. Based on the experimental results, the sensitivity of the fiber optic acoustic sensor depends on the mandrel orientation.
Institute Of Acoustics, Chinese Academy Of Sciences, Beijing, P.R.China
ABSTRACT
Distributed measurement based on a Local Area Network (LAN) is popular for measuring machine vibration, for which the timing requirements are becoming increasingly stringent. Traditional synchronization methods, such as the Network Time Protocol (NTP) and the Simple Network Time Protocol (SNTP), can achieve an accuracy of microseconds, but this does not meet the requirements; at the same time, a dedicated sync cable is not suitable for long-distance transmission. A synchronization device based on the Precision Time Protocol (PTP, IEEE 1588) is therefore designed. A Field Programmable Gate Array (FPGA) is placed between the Media Independent Interface (MII) and the PHY. Normal data packets pass through without modification, while IEEE 1588 packets are unpacked and stamped with the accurate time. The FPGA, which also has a management unit to process IEEE 1588 events, compensates for the drift of the crystal oscillator, which is notable during the sync interval, by using a discrete linear Kalman filter. A programmable trigger output is used to drive the sampling unit. Acoustic measurement equipment is based on this design, and preliminary experiments show that it satisfies the requirements.
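As a simplified illustration of the drift-compensation idea, the sketch below runs a two-state (offset, drift) discrete linear Kalman filter on synthetic PTP offset measurements; the noise parameters, sync interval and variable names are assumptions for illustration, not the authors' FPGA implementation.

import numpy as np

def track_clock(offset_meas, sync_interval, q=1e-18, r=1e-14):
    # Minimal discrete linear Kalman filter with state [offset, drift];
    # offset_meas holds one PTP offset measurement (in seconds) per sync interval.
    F = np.array([[1.0, sync_interval], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])                         # only the offset is measured
    Q = q * np.eye(2)                                  # process noise (oscillator instability)
    R = np.array([[r]])                                # measurement noise variance
    x = np.zeros(2)
    P = np.eye(2)
    estimates = []
    for z in offset_meas:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)   # columns: filtered offset, estimated drift

# Hypothetical run: 1 s sync interval, 50 ppb true drift, 100 ns measurement noise
rng = np.random.default_rng(0)
meas = 50e-9 * np.arange(100) + 100e-9 * rng.standard_normal(100)
est = track_clock(meas, 1.0)
print(f"estimated drift = {est[-1, 1]:.1e} s/s")   # close to the true 5e-8 s/s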
School of Mechanical Engineering, Georgia Institute of Technology, USA
ABSTRACT
Systems involving acoustic propagation are frequently characterized as being linear time-invariant (LTI) systems that can be fully described by an impulse response. In reality these are better described as systems with slow time variance and non-essential nonlinearity that can be reasonably approximated by LTI systems when they are excited by small amplitude disturbances. Experimentally determining the parameters of the appropriate LTI system approximation involves an inherent tradeoff between noise and distortion. At small drive levels, noise is problematic; whereas at larger drive levels, nonlinear distortion rather than noise limits the quality of the approximation. For practical purposes, the quality of a measurement is determined by the signal-to-noise-and-distortion (SINAD) ratio associated with it rather than merely the signal-to-noise ratio (SNR) of the measurement. Unfortunately, the nonlinear distortion contribution to the denominator of the SINAD cannot be independently measured as the noise can (albeit not concurrently).
It is proposed that a reasonable approximation to the SINAD can be achieved by applying a linear pulse compression to measured data such that the pulse response of the system along with some noise may be temporally separated from a significant portion of the measured noise and distortion. The approximate ratio can then be formed with the energy computed for each of these separated components. This scheme was tested on three different systems: a model system with quadratic non-essential nonlinearity, a physical system involving wave propagation in damp compacted sand that exhibits a variety of nonlinearities, and a physical system involving ultrasonic propagation in a tissue-mimicking phantom that involves slow time variance and both thermal and mechanical nonlinearities. The approximation appears to be a suitable surrogate for the SINAD in each of these cases.
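A toy sketch of the proposed idea under strong simplifications (a memoryless quadratic nonlinearity, a linear chirp excitation, and an arbitrary gate width): the response is pulse-compressed against the excitation, the compressed pulse is time-gated, and the in-gate to out-of-gate energy ratio is taken as an approximate SINAD. None of the paper's three test systems is reproduced here.

import numpy as np
from scipy.signal import correlate

rng = np.random.default_rng(1)
fs = 48000
t = np.arange(fs) / fs

# Hypothetical excitation (linear chirp) and system (gain + quadratic distortion + noise)
x = np.sin(2 * np.pi * (100.0 + 0.5 * (4000.0 - 100.0) * t) * t)
y = 1.0 * x + 0.05 * x ** 2 + 0.01 * rng.standard_normal(len(x))

# Linear pulse compression: cross-correlate the response with the excitation
cc = correlate(y, x, mode="full")
peak = np.argmax(np.abs(cc))

# Time gate around the compressed pulse: in-gate energy is taken as the linear response,
# out-of-gate energy as noise plus (part of the) nonlinear distortion
gate = 200   # samples on each side of the peak (arbitrary gate width)
in_gate = cc[peak - gate:peak + gate + 1]
out_gate = np.concatenate([cc[:peak - gate], cc[peak + gate + 1:]])
sinad_db = 10 * np.log10(np.sum(in_gate ** 2) / np.sum(out_gate ** 2))
print(f"approximate SINAD = {sinad_db:.1f} dB")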
Toyama Prefectural University, Toyama, Japan
ABSTRACT
A fast head-related transfer function (HRTF) measurement system based on the Helmholtz reciprocity principle was built, and near-field HRTFs were measured using two types of miniature speakers. The HRTF measuring system can measure 36-channel HRTFs simultaneously using five word-clock-synchronized analog-to-digital converters, which are PreSonus multi-track recording equipment. HRTFs can be measured with good signal-to-noise ratios from 250 Hz to 13 kHz using a DTEC-30008 (Knowles Electronics); this miniature speaker has a high output level in the low and mid frequency ranges. HRTFs can be measured with good signal-to-noise ratios from 2 to 20 kHz using an ED-29689 (Knowles Electronics); this miniature speaker has a high output level in the high frequency range. HRTFs cannot be measured accurately below 250 or 400 Hz with either speaker because the signal-to-noise ratio of the TSP signal responses is too low. Although the sound pressure level is higher at near field (0.2-m distance) than at far field (1-m distance), the signal-to-noise ratio below 400 Hz is not sufficient in either case. These results indicate that the reciprocal HRTF measuring system can measure HRTFs from 400 Hz to 20 kHz quickly and accurately; the frequency range, however, depends greatly on the frequency response of the miniature speaker placed in the outer ear canal.
(1) Shibaura Institute of Technology, Japan (2) Tokyo Metropolitan University, Japan (3) Akita Prefectual University, Japan (4) Tokyo Metropolitan College of Industrial Technology, Japan
ABSTRACT
MRI equipment is important for medical diagnosis. Its driving sound is loud, with noise levels exceeding 100 dB, which is unpleasant for patients. The operation of the gradient magnetic field for imaging generates the loud sound. In high-field equipment, the gradient magnetic field is generated by coils on three orthogonal axes. Most studies only consider the sound driven by the synthesized gradient magnetic field of the three axes. In this paper, the characteristics of the sound produced by operating the gradient magnetic field of a single axis, using a self-made sequence, are shown. The measured equipment is a Signa Horizon LX 1.5T from GE Yokogawa Medical Systems, with a static magnetic field of 1.5 tesla, a gradient magnetic field of 22 mT/m, and a slew rate of 77 T/m/ms. In this measurement, the gradient magnetic field is controlled without the radio frequency pulse, which would generate a noise signal. The sound is measured near the center of the bore of the equipment. The strength of the gradient magnetic field was varied, and the linearity between gradient strength and sound pressure was clarified.
(1) Tohoku University, Sendai, Japan (2) JST,CREST, Chiyoda, Tokyo, Japan (3) Ball Semiconductor Inc., Japan
ABSTRACT
For efficient control of fuel cells, sensitive measurement of hydrogen gas concentration under high humidity is required. We succeeded in detecting 10 ppm hydrogen with a ball surface acoustic wave (SAW) sensor with a PdNi sensitive film (a hydrogen-absorbing alloy), owing to the ultra-multiple roundtrips of the SAW over more than 100 turns. However, there is a problem of degradation of the film by humidity. Although a planar SAW sensor with a ZnO/Pt sensitive film has been reported as a humidity-proof hydrogen sensor, detection of hydrogen at concentrations as low as 10 ppm was not verified.
In this study, we aimed to develop a humidity-proof ball SAW sensor with a ZnO/Pt sensitive film for low-concentration hydrogen. We fabricated the sensitive film on a 3.3 mm langasite ball SAW sensor by RF magnetron sputtering, depositing a 200 nm ZnO film followed by a 5 nm Pt film as a catalyst. We measured the amplitude-change response to hydrogen in nitrogen carrier gas at concentrations of 200 ppm, 100 ppm, 50 ppm, and 20 ppm at 80℃. Even 20 ppm hydrogen was detected, as a 0.8 dB amplitude change with a noise level of 0.0063 dB. After the sensor was wetted with water, the response was measured again; 20 ppm hydrogen was then detected as a 0.4 dB amplitude change with a noise level of 0.00219 dB. We calculated a detection limit, defined as the concentration corresponding to a signal-to-noise ratio of 3. Since the detection limits before and after wetting were 0.47 ppm and 0.27 ppm, respectively, we found that the ZnO/Pt sensitive film was not degraded by wetting. In conclusion, it was shown that the ball SAW sensor with a ZnO/Pt sensitive film is useful as a humidity-proof hydrogen sensor with a sub-ppm detection limit.
(1) Tokyo Denki University, Japan (2) Tokyo University of Agriculture & Technology, Japan
ABSTRACT
With the recent escalation of environmental concerns, monitoring of flow rates in flumes and drain pipes is in great demand. Conventional ultrasonic flow meters cannot meet this demand, since most of these devices are intended for filled flows, such as in closed pipes, where the conventional travel-time method based on through-transmission observation can be applied. On the other hand, there are few methods applicable to unfilled flows in a pipe or a small open-channel flume. To address the problem, a technique is proposed using a single transmitter/receiver transducer attached to the bottom of the pipe. Pulse-echo signals scattered from particles in the medium are recorded repetitively at a constant time interval. From the shift of the correlation peaks between the echoes of successive excitations, the flow velocity of the medium is estimated. The method has the advantage that the influence of water-surface variation can be avoided under non-turbulent, laminar flow conditions. To show the feasibility of the technique, experiments were carried out on a rippling, unfilled water flow in a pipe of diameter 54 mm and length 1000 mm. Starch powder was mixed into the simulated drainage water as scatterers. The flow velocity measured by the present method was compared with predetermined values over the range 0-5 cm/s. The results showed that the precision of the measured flow speed was satisfactory and tolerant of rippling of the water surface as long as the non-turbulent flow condition was satisfied.
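The correlation step can be sketched as follows for two synthetic echo records separated by a known pulse interval; the sample rate, sound speed and the simple pulse-echo geometry factor are assumptions for illustration, not the experimental values of the paper.

import numpy as np
from scipy.signal import correlate

def echo_shift_velocity(echo1, echo2, fs, pulse_interval, c=1480.0):
    # Estimate the scatterer velocity along the beam from two successive pulse-echo
    # records separated by pulse_interval seconds. An echo time shift dt corresponds
    # to a path-length change c*dt, i.e. a displacement of c*dt/2 in pulse-echo geometry.
    cc = correlate(echo2, echo1, mode="full")
    lag = np.argmax(cc) - (len(echo1) - 1)      # shift in samples
    dt = lag / fs
    return (c * dt / 2.0) / pulse_interval

# Synthetic test: shift a random scattering record by 3 samples between pulses
rng = np.random.default_rng(2)
fs = 10_000_000                 # 10 MHz sampling (hypothetical)
echo1 = rng.standard_normal(4000)
echo2 = np.roll(echo1, 3)       # echoes arrive 3 samples later on the next pulse
v = echo_shift_velocity(echo1, echo2, fs, pulse_interval=0.01)
print(f"estimated velocity = {v * 100:.1f} cm/s")   # about 2.2 cm/s for these numbers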
Technological Institute for Superhard and Novel Carbon Materials, Troitsk, Russia
ABSTRACT
The determination of material properties (elastic constants) by ultrasound is based on the measurement of the phase velocity of elastic waves. Pulsed ultrasound methods are widely used in modern NDT and laboratory investigations for material characterization. The accuracy of these methods strongly depends on the precision of the phase and amplitude measurements of the propagating pulses. The major factors that induce errors in measurements of sound velocity and attenuation in composites are acoustic dispersion and diffraction. The beam diffraction effect occurs because of the limited size of ultrasonic transducers, while the use of ultrashort (wide-field) ultrasonic pulses for measurements leads to wave dispersion effects.
We carried out comparative measurements of times of flight, speed of sound and elastic constants in nanostructured composite materials such as Al/C60, SiC/Al/nanodiamond and WC/Al/nanodiamond to reveal the impact of these factors. Pulse acoustic microscopy and laser ultrasonic techniques were used for the comparative investigations and analysis. Short (~7 ns) Nd-YAG laser pulses generate wide-band acoustic pulses in the laser ultrasonic source; the optoacoustic transducer bandwidth was 0.1-15 MHz. The wide-field pulse scanning acoustic microscope was applied in reflection mode to measure local values of the speed of sound (microacoustic technique), using ultrashort 30-40 ns ultrasonic pulses. The change in the spectrum of the ultrasonic pulses was analyzed by signal processing techniques. The elastic constants were calculated on the basis of the measured speeds of sound and the densities of the samples. The data obtained by the two methods coincided to within ~1% for the longitudinal (VL) and transverse (VT) sound velocities, and to within ~2-3% for the elastic constants. We therefore estimate diffraction and dispersion errors of about 1% in speed-of-sound measurements in nanostructured materials when wide-field ultrasonic pulses are used for the time-of-flight method.
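For reference, the standard isotropic relations that connect the measured longitudinal and transverse velocities and the density to the elastic constants are sketched below; the numerical values are placeholders roughly representative of a stiff composite, not the paper's data.

import numpy as np

def isotropic_elastic_constants(vl, vt, rho):
    # Standard isotropic relations between ultrasonic wave speeds and elastic constants.
    # vl, vt: longitudinal and transverse (shear) phase velocities in m/s; rho: density in kg/m^3.
    G = rho * vt ** 2                                        # shear modulus
    lam = rho * (vl ** 2 - 2.0 * vt ** 2)                    # Lame's first parameter
    K = lam + 2.0 * G / 3.0                                  # bulk modulus
    E = G * (3.0 * vl ** 2 - 4.0 * vt ** 2) / (vl ** 2 - vt ** 2)   # Young's modulus
    nu = (vl ** 2 - 2.0 * vt ** 2) / (2.0 * (vl ** 2 - vt ** 2))    # Poisson's ratio
    return {"G": G, "K": K, "E": E, "nu": nu}

# Placeholder input values (not measured data)
consts = isotropic_elastic_constants(vl=9000.0, vt=5500.0, rho=3200.0)
print({k: round(v / 1e9, 1) if k != "nu" else round(v, 3) for k, v in consts.items()})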
(1) Shibaura Institute of Technology, Japan (2) Tokyo Metropolitan University, Japan
ABSTRACT
Magnetic resonance imaging (MRI) equipment with a high magnetic field generates a loud sound of around 100 dB. The purpose of this study is to search for the sources of the MRI driving sound, with a view to hearing protection. We previously reported a sound source search on the patient table of the MRI equipment; this paper shows the results of a sound source search of the driving sound in the near field of the MRI equipment. The target equipment is a whole-body Signa Horizon LX 1.5T (GE Yokogawa Medical Systems) with a static magnetic field of 1.5 T. The analyzed sounds are the auto-tuning pulses and the continuous driving sound of slice positioning. The auto-tuning pulses are generated at the beginning of the imaging sequence; slice positioning is used to obtain an image for deciding the region of diagnosis. The measurement conditions are as follows: the sampling frequency is 48000 Hz, the sound intensity analysis uses the cross-spectral method with the Fast Fourier Transform (FFT), the window function is a Hanning window, the FFT data length is 4096 points, and the analysis frequency range is 176 to 1414 Hz. The results show that the maximum sound intensity level of the driving sound was 89.3 dB and that the intensity vectors radiate from the side surface of the MRI gantry. It is clear that the sources of the driving sound are the side shell and the bore of the MRI equipment.
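A minimal sketch of the cross-spectral (p-p) intensity estimate referred to above, applied to a synthetic plane wave rather than to MRI data; the microphone spacing and signal parameters are assumptions for illustration.

import numpy as np
from scipy.signal import csd

rho = 1.21        # air density, kg/m^3
d = 0.012         # microphone spacing, m (hypothetical)
fs = 48000
c = 343.0

# Synthetic two-microphone signals: a 1 Pa, 500 Hz plane wave travelling along the probe axis
rng = np.random.default_rng(3)
t = np.arange(4 * fs) / fs
p1 = np.sin(2 * np.pi * 500 * t) + 0.01 * rng.standard_normal(len(t))
p2 = np.sin(2 * np.pi * 500 * (t - d / c)) + 0.01 * rng.standard_normal(len(t))

# Cross-spectral (p-p) estimate of the active sound intensity:
#   I(f) = -Im{G12(f)} / (rho * 2*pi*f * d)
f, G12 = csd(p1, p2, fs=fs, nperseg=4096, window="hann")
with np.errstate(divide="ignore", invalid="ignore"):
    intensity = -np.imag(G12) / (rho * 2 * np.pi * f * d)

band = (f >= 176) & (f <= 1414)                     # analysis band used in the paper
I_band = np.sum(intensity[band]) * (f[1] - f[0])    # net active intensity in the band
print(f"band intensity level = {10 * np.log10(abs(I_band) / 1e-12):.1f} dB re 1 pW/m^2")
# about 91 dB for this 1 Pa plane wave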
Department of Electronics and Information Engineering, Faculty of Engineering, Toin Univ. of Yokohama, Japan
ABSTRACT
We propose a method for distinguishing a buried object using its frequency response range, corresponding to the vibration velocity. Air-borne sound and a Scanning Laser Doppler Vibrometer (SLDV, Polytec Corp., PSV400-H8) are used for non-contact acoustic imaging of the extremely shallow underground. Flat speakers (FPS Corp., 2030M31R) that have a sharp directivity are used as vibration sources. A plastic container (11 x 11 x 6 cm, hollow, 80 g) and an unglazed pot (top dia. 12 cm x 4 cm x bottom dia. 4 cm, hole dia. 1.5 cm, 225 g) are used as buried objects; they are buried in sand (particle size about 200-300 um) at depths of about 2 cm and 5 cm.
First, noise waves are used to confirm the buried object's frequency response range. The ground surface image obtained with the SLDV is used to confirm the positions of the buried objects. To confirm the frequency range, the difference between the vibration velocities at the buried positions and at peripheral positions is used. The frequency response range of the buried object is shown as a brightness-mode image. Next, burst waves are emitted to obtain a clearer image; the frequency of the burst waves is set near the frequency response range. Finally, the buried object's frequency response range is checked again in the same way, and the clear image is made using the optimum frequency. We confirmed the frequency response range of each buried object. From the indoor experimental results, the response range of the plastic container was distinctly different from that of the unglazed pot. The difference in frequency response range between the buried objects seems to depend on differences in density and size. Examining the frequency response ranges of other materials, such as wood and metal, and confirming whether they can be distinguished from other buried objects are future tasks. In addition, it is necessary to check whether outdoor experiments produce equivalent results.
Faculty of Engineering, Department of Electronics and Information Engineering, Toin Univ. of Yokohama, Japan
ABSTRACT
Korotkoff sounds are currently used only for blood pressure measurement. However, we think that information about the circulatory system of the human body is contained in their waveform. We therefore collected Korotkoff sound waveform data and analyzed them. Several healthy students in their twenties from our laboratory and elderly people over sixty from a daycare center (more than 20 people) cooperated in this data acquisition experiment. The measurement system consists of a mercury sphygmomanometer, a ceramic microphone for blood pressure measurement, and a notebook PC. The second Korotkoff sounds are used to acquire stable waveforms. The experimental results made clear that there is a remarkable difference between the students and the senior citizens.
First, the velocity waveforms of the young subjects are large and intense, whereas there is little variation in the case of the senior citizens. Next, in the frequency analysis, a negative peak frequency can be seen for the young subjects but cannot be seen clearly for the elderly subjects. Finally, in the acceleration waveform obtained by differentiating the velocity waveform, more than five peaks can be confirmed clearly for the young subjects, whereas for the senior citizens the peaks after the fourth cannot be confirmed well. From these results, we confirmed that health-related information is contained in the Korotkoff sounds themselves at the time of blood pressure measurement. We will collect more Korotkoff sound waveform data and examine whether they can be established as a statistically meaningful health index.
(1) Universidade Estadual de Maringá, Maringá, Brazil (2) Universidade Estadual de Campinas, São Paulo, Brazil
ABSTRACT
This study presents a comparison between three types of impulsive noise sources. It is part of a dissertation in progress which studies the development of materials with good sound absorption characteristics that can be applied in wet rooms. For the measurements, the impulsive method was used and, in addition to balloons, two manually operated devices (one made of wood and one metallic) were developed to generate the impulses. The reverberation time was measured in two large indoor spaces, both covered heated pools with high temperature and humidity levels. The impulse response was captured by a Behringer ECM 8000 ½-inch omnidirectional microphone connected to the Dirac 3.1 software. Four measurements were performed with the microphone facing each side of the room, in order to capture an average response, since there are differences in shape and surface finish between the evaluated rooms. For frequencies between 250 and 16,000 Hz, it was found that there is a good correlation between the values measured with the three types of generated pulses, with relative deviations of the order of 10%, whereas for frequencies between 31.5 and 125 Hz larger deviations were obtained. It is believed that this occurred because of the difficulty of exciting the low frequencies with the impulsive method, which prevents the reverberation time from being measured reliably at these frequencies; to check which type of impulse allows data closest to reality to be collected, reverberation time simulations with the software Ease 4.2 were conducted. Comparing the measured values with the simulated values, it is concluded that for rooms with low-frequency noise and without an appropriate sound source to excite the lower frequencies, acoustic simulation methods are the most recommended for studies of room acoustic quality. For medium and high frequencies, the pulses generated with the wooden and metal manually operated devices were suitable for estimating the reverberation time.
Institute of Communication Systems and Data Processing, RWTH Aachen University, Germany
ABSTRACT
The measurement of the impulse response of a time-invariant acoustical transmission path must often be performed in adverse noisy environments. The most common solution for improving the SNR of the measured impulse response is to repeat the excitation signal periodically and average the periods of the system response. By this means, the SNR can be increased by up to 3 dB per doubling the number of periods. To achieve the maximum increase of the SNR the noise signal must be white, stationary, and statistically independent of the excitation signal. If these conditions are not met, the overall SNR can decrease dramatically. Specifically, in the presence of transient noise components averaging periods with greatly differing SNRs mostly results in an overall SNR being significantly lower than the SNR of the "best" period. This effect can render a long measurement with many periods useless. In this paper a novel method for performing an impulse response measurement over multiple periods is presented which is robust against fluctuating and transient noise. It makes use of an algorithm recently published by the authors for estimating the SNR during the measurement. With the knowledge of an accurate estimate of the noise power it is possible to introduce noise power dependent weighting factors into the averaging process. Such weighting factors are derived and proved to be the optimum factors in the sense of maximizing the SNR in the resulting impulse response. The common averaging approach is contained in the presented method as a special case when the weighting factors are equal for all periods. This happens when the noise power is constant throughout the whole measurement procedure. The method proposed in this paper makes it possible to retrieve high quality impulse responses with high SNR even in changing noise conditions. Results of practical examples will be presented.
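The weighting idea can be sketched as follows, assuming the per-period noise powers are already known (the paper's on-line SNR estimation is not reproduced): weights proportional to the inverse noise power, normalised to sum to one, maximise the SNR of the weighted average and reduce to the ordinary average when the noise power is constant.

import numpy as np

def weighted_period_average(periods, noise_powers):
    # Average the periods of a repeated measurement with weights inversely
    # proportional to the estimated noise power of each period (w_i ~ 1/sigma_i^2,
    # normalised to sum to one). Equal noise powers give the ordinary average.
    periods = np.asarray(periods, dtype=float)
    w = 1.0 / np.asarray(noise_powers, dtype=float)
    w /= w.sum()
    return (w[:, None] * periods).sum(axis=0), w

# Hypothetical example: 8 periods, one of them hit by a strong transient disturbance
rng = np.random.default_rng(4)
n, length = 8, 2048
h = np.exp(-np.arange(length) / 200.0) * rng.standard_normal(length)   # "true" response
noise_powers = np.full(n, 1e-2)
noise_powers[3] = 1.0                         # transient noise in period 4
periods = h + rng.standard_normal((n, length)) * np.sqrt(noise_powers)[:, None]

plain = periods.mean(axis=0)
weighted, w = weighted_period_average(periods, noise_powers)
err = lambda x: 10 * np.log10(np.mean((x - h) ** 2) / np.mean(h ** 2))
print(f"plain average error   : {err(plain):.1f} dB")
print(f"weighted average error: {err(weighted):.1f} dB")   # lower (better) than the plain average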
(1) Tohoku University, Sendai, Japan (2) JST, CREST, Chiyoda, Tokyo, Japan (3) TOPPAN Printing Co., LTD., Sugito, Saitama, Japan
ABSTRACT
We discovered the naturally collimated beam of surface acoustic waves (SAW) on balls and developed the ball SAW sensor, which has the unique advantage of ultra-multiple roundtrips of more than 100 turns. We achieved, for the first time, a hydrogen sensing range from 10 ppm (0.001%) to 100% with a single sensor element, which is essential for hydrogen stations and fuel cell (FC) cars. However, on-site monitoring of mixed gases of volatile organic compounds (VOC) and the FC gases (H2, N2, water vapor, and hydrocarbons) has not been realized. In this study, we propose a gas chromatograph (GC) using the ball SAW sensor and micro-electro-mechanical systems (MEMS) columns to realize this goal.
The ball SAW sensors (φ3.3 mm, 150 MHz) were coated either with polymer sensitive films (e.g. PDMS, polydimethylsiloxane) for VOC, using off-axis spin coating, or with Pd/Pt or ZnO/Pt sputtered films for FC gases. For the separation of mixed gases, two types of MEMS columns were fabricated: an open-tube MEMS column with moderate retention force for VOC, and a packed MEMS column with strong retention force for FC gases. The packed MEMS column was developed using diffusion bonding of stainless steel plates to improve the robustness and reduce the cost of the previous silicon MEMS columns. It was packed either with SDB (styrene-divinylbenzene) polymer beads for humid FC gas or with activated carbon powder for H2. We assembled the above components into a small GC operating at room temperature. Here, we shorten the analysis time for mixed gases with a wide range of molecular weights by switching two column-sensor pairs from a series to a parallel combination. We verified that a gas mixture (CO2, C2H6, C3H8, benzene, and toluene) was separated into its components within 10 minutes at room temperature. The gas concentrations were quantitatively analyzed by fitting reference chromatograms to the measured ones. In conclusion, the ball SAW GC will realize on-site monitoring of VOC and FC gases for use in cars, homes and smart grids.
University of Applied Sciences, Hamburg, Germany
ABSTRACT
Measuring body impulse responses of stringed instruments is one of the basic issues in musical acoustics and is repeatedly approached with new methods. Presuming that the resonator is a linear system, its transfer function offers a lot of information on the instrument's timbre, reverberation and directional radiation properties. Impulse responses are used as starting point for modeling approaches for example, or to investigate the relationship between particular resonance constellations and the instruments' quality. The most common method for impulse response measurements is to excite an instrument at the side of the bridge using an impulse hammer, a shaker, or a Dünnwald exciter.
This paper suggests an alternative, simple and individually adjustable technique which delivers highly reproducible responses. The method is based on exciting the damped strings at the bowing or plucking position by means of a thin copper wire which is pulled until it breaks. Taking into account the longitudinal and torsional movements of the bridge caused by string deflections, this stimulus of the body is much closer to the musical application. Since the geometric configuration of the measurement setup can be exactly specified, the proposed method allows for highly accurate repetition in comparative studies. In the paper, the setup, including a fully automated exciting apparatus as well as a 'silent' quadrochord, is described in detail. In addition, since the method was developed in the context of a research project on violin sound quality, an application is presented in which the technique is used to measure binaural impulse responses of violins.
(1) School of Transportation Science and Engineering, Harbin Institute of Technology, Harbin, P.R.China (2) School of Computer and Information Engineering, Beijing Technology and Business University, Beijing, China 100048
ABSTRACT
This paper presents the development of a delay pulse circuit with 1 ns resolution, both for driving ultrasonic array transducers and for digital receive beamforming in an ultrasonic phased array system. The circuit, which can supply pulses with a 1 ns time resolution, employs phase shifting with a phase-locked loop (PLL). In this way, six phase clocks with 1 ns phase difference are generated and used to drive channel counters; the delay pulses of each channel are generated by these counters. Based on this circuit, a design for digital receive beamforming with 1 ns delay resolution is also presented. The design adopts the PLL phase-shifting technique to generate six clocks with 6 ns periods and 1 ns phase difference, which are selected as a non-uniform sampling clock to drive the A/D converters according to the receive focal points. The delay pulse circuit is built with low-cost Field Programmable Gate Array (FPGA) components. By employing the novel architecture, the circuit has been implemented in an FPGA. The simulation results are compared with two programmable delay chips and show that the proposed architecture achieves satisfactory performance for pulse delay.
Department of Telecommunications, Széchenyi István University, Győr, Hungary
ABSTRACT
Measurements of the transfer function of headphones and earphones are made on dummy heads or on ear simulators. This paper introduces measurement problems of newly designed in-ear phones, often called micro-driver equipment. These earphones have a smaller transducer diameter and are equipped with rubber or spongy material to fit in the ear canal. This coupling may create increased sound isolation, less sound pressure and better low-frequency transmission to the eardrum. The paper presents a subjective evaluation by listeners of five different kinds of in-ear phones as well as measurement results using a dummy head. Measurement problems are highlighted, pointing to new aspects for a revised dummy-head standard.
School of Engineering, Edith Cowan University, Joondalup, WA, Australia
ABSTRACT
In this paper we give a short review of Fibre Bragg Grating (FBG) sensors for the detection of acoustic signals, in particular ultrasound. The primary advantage of FBGs as sensing elements is their spectral encoding of the measurand, which can be either strain or temperature. However, spectral decoding methods cannot be utilized to detect high-frequency signals due to their inherently low speed. We review the interrogation method required for the high-speed detection of high-frequency signals, in addition to discussing the theory behind FBGs as sensors. A number of applications of these FBG acoustic sensors will be outlined, including in-vivo biomedical sensing, acoustic hydrophones, non-destructive evaluation and structural health monitoring.
In addition to this introduction to the field of FBG acoustic sensing, we also present recent results on the implementation of a novel cost effective detection system. The FBG detection system developed to convert the strain induced spectral shift of the FBG into an intensity modulation is called a Transmit Reflect Detection System (TRDS). The TRDS is an extension to the standard power detection method for FBGs. In conventional power detection schemes, the reflected portion of the incident spectrum is monitored to determine the change in the measurand. In the TRDS, both the transmitted and reflected portions of the input spectrum, from a narrow band light source, are utilised. The optical power of the transmitted and reflected signals are measured via two separate photoreceivers. As the spectral response of the FBG shifts due to the measurand, the transmitted power will increase, and the reflected power will decrease, or vice versa. By differentially amplifying the transmitted and reflected components, the overall signal is increased. This results in improved sensitivity and efficiency of the photonic sensor. We show results for the sensitivity and dynamic resolution of the detection system.
Shibaura Institute of Technology, Tokyo, Japan
ABSTRACT
The oblique-incidence absorption coefficient is usually discussed in the context of reducing road traffic noise. A typical measuring system uses an omni-directional microphone and an omni-directional loudspeaker. However, when the incidence angle is large, separating the incident wave from the reflected wave is difficult with the typical method. This paper describes a proposed method that uses an omni-directional sound source and a directional microphone, and presents the results measured with the proposed method.
Institute of Acoustics, Chinese Academy of Sciences, Beijing, P.R.China
ABSTRACT
Rotating and reciprocating machines, such as engines, electric motors and pumps, are widely applied in modern ship design and manufacture. It is well known that an incidental malfunction of such equipment in this complex system may cause a critical accident during navigation. In order to improve the safety and reliability of the ship, a distributed network-based system is proposed and developed to monitor the important equipment. The monitoring system, consisting of a console, several sampling devices and hundreds of sensors, is designed for permanent installation in ships under realistic working conditions. The vibration signals are filtered and converted by the sampling devices and sent to the console via the LAN (Local Area Network). The information is analyzed and stored by the computer built into the console to realize ship state monitoring and fault detection. For convenience in setting the sample rate of each channel individually, all signals are sampled at the highest sample rate to satisfy the Nyquist theorem, then filtered and downsampled in an FPGA (Field Programmable Gate Array). So far, the hardware, the embedded software and the information management software have been completed.
(1) Graduate School of Engineering, Oita University, Oita, Japan (2) Department of Architecture and Mechatronics, Faculty of Engineering, Oita University, Oita, Japan (3) Venture Business Laboratory, Oita University, Oita, Japan
ABSTRACT
This paper experimentally investigates the absorption characteristics of several materials using a proposed acoustic impedance method based on combined sound pressure and particle velocity sensors in various sound fields. The method is based on the concept of 'ensemble averaged' surface normal impedance, which extends the usage of the obtained values to various applications such as architectural acoustics and computational simulations. The measurement technique itself is an improvement of the method using the two-microphone technique and diffuse ambient noise, as proposed by Takahashi, Otsuru et al. A series of measurements in different sound fields was conducted to expand the applicability of in-situ measurement using a pu-sensor. The first part of the experiment aimed to confirm the reproducibility of the values measured with the method. Here, comparative round-robin measurements in four reverberation rooms were conducted to ensure that the results could be obtained with reasonable accuracy. An accompanying discussion of the general tendencies and discrepancies for ten materials between the various reverberation rooms is provided. In the second stage, a trial application with four selected materials of reliable specimen size was carried out to demonstrate the ubiquitous examination of a material's absorption characteristics in different sound fields, such as architectural spaces. Throughout the investigation, the paper demonstrates the reliability, applicability and robustness of the method as an in-situ measurement.
NHK Science and Technology Research Laboratories, Tokyo, Japan
ABSTRACT
Several new sound systems have been proposed to provide enhanced spatial impression compared with conventional 5.1 surround sound. Such systems require more loudspeakers than the 5.1 system to deliver a superior sound impression, but it can be difficult to introduce these systems into the typical home environment.
This paper describes a new method for converting the signal of an original sound system into that of an alternative system with a different number of channels, while maintaining the physical properties of the sound at a listening point in the reproduced sound field. The physical properties maintained are the sound pressure and the direction of the particle velocity. Based on the coordinates of the loudspeaker positions, the method calculates a conversion matrix as the solution. The conversion matrix does not depend on frequency, and it is therefore expected that the proposed method does not change the timbre of the reproduced sound.
22-channel signals from a 22.2 multi-channel sound system were converted into 10-, 8- and 6-channel signals with the method. Subjective evaluation based on ITU-R recommendation BS.1116-1 showed that the converted 8-channel sound gave almost the same spatial impression as the original 22-channel sound, meaning that the proposed method could largely reproduce the original 22-channel sound field with 8 loudspeakers. The paper also shows the recommended arrangement of 8 loudspeakers for reproduction of 22.2 multi-channel sound based on the subjective evaluation test.
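A loose sketch of the stated constraints (preserve the sound pressure and the direction of the particle velocity at the listening point), solved per source channel as a frequency-independent least-squares problem over the target loudspeaker gains; the layouts and solver below are illustrative assumptions, not the authors' actual conversion matrix.

import numpy as np

def conversion_matrix(src_dirs, tgt_dirs):
    # Frequency-independent downmix matrix M (n_target x n_source): for each source-channel
    # direction, the target gains are chosen in a least-squares sense so that the sum of
    # gains is 1 (pressure preserved) and the gain-weighted sum of target unit vectors
    # equals the source unit vector (particle-velocity direction preserved).
    src = np.asarray(src_dirs, float)
    tgt = np.asarray(tgt_dirs, float)
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    A = np.vstack([np.ones((1, len(tgt))), tgt.T])     # constraint matrix, (1+3) x n_target
    M = np.empty((len(tgt), len(src)))
    for j, u in enumerate(src):
        b = np.concatenate([[1.0], u])
        g, *_ = np.linalg.lstsq(A, b, rcond=None)
        M[:, j] = g
    return M

# Hypothetical layouts: 4 source channels converted to 3 target loudspeakers (direction vectors)
src_dirs = [(1, 1, 0), (1, -1, 0), (-1, 1, 0), (-1, -1, 0)]
tgt_dirs = [(1, 0, 0), (-1, 1, 0), (-1, -1, 0)]
M = conversion_matrix(src_dirs, tgt_dirs)
# Target loudspeaker signals are then y = M @ x, with x the vector of source-channel signals.
print(np.round(M, 3))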
School of Mechanical Engineering, The University of Adelaide, SA, Australia
ABSTRACT
The use of aeroacoustic beamforming has increased dramatically in the past decade. The primary driving force behind this has been the need to improve the noise characteristics of aircraft and automotive vehicles, coupled with ever increasing computer processing power. Aeroacoustic beamforming is an experimental technique that uses an array of microphones located in the far field of acoustic noise sources generated by a body in air flow. Each microphone measures an acoustic magnitude and relative phase based on its unique position with respect to the acoustic source(s). Beamforming algorithms process this data, typically to generate spatial noise source plots over a two dimensional grid at each frequency of interest. Much of the available aeroacoustic beamforming literature presents results at relatively high frequencies corresponding to large facilities, scale models, and available budgets, which can potentially set unrealistic goals for the development of a small-scale university research facility. This paper details the design and calibration of a small aeroacoustic beamformer, designed to investigate airfoil trailing edge noise for low to moderate Reynolds number flows. The optimisation of the microphone array, based on spatial, air flow and financial constraints, is presented. The algorithms which were used to calculate the beamformer outputs are described, as well as the array calibration process, including beamforming of various noise sources in an anechoic environment. The array is shown to successfully detect and accurately locate both tonal and broadband noise sources.
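For orientation, a minimal frequency-domain delay-and-sum (conventional) beamformer over a scan grid is sketched below; the array geometry, source and grid are synthetic placeholders, and none of the paper's array design, shading or calibration steps are reproduced.

import numpy as np

def beamform_map(csm, mic_pos, grid_pts, freq, c=343.0):
    # Conventional (delay-and-sum) frequency-domain beamformer.
    # csm: (M, M) cross-spectral matrix at `freq`; mic_pos: (M, 3) microphone positions;
    # grid_pts: (G, 3) scan points. Returns the beamformer output power at each scan point.
    k = 2 * np.pi * freq / c
    out = np.empty(len(grid_pts))
    for g, x in enumerate(grid_pts):
        r = np.linalg.norm(mic_pos - x, axis=1)
        v = np.exp(-1j * k * r) / r              # monopole steering vector
        w = v / np.linalg.norm(v)                # normalised weights
        out[g] = np.real(w.conj() @ csm @ w)
    return out

# Hypothetical test: a 3 kHz point source at (0.1, 0, 1) m in front of a small planar array
rng = np.random.default_rng(5)
mic_pos = np.c_[rng.uniform(-0.3, 0.3, (32, 2)), np.zeros(32)]
src = np.array([0.1, 0.0, 1.0])
r_src = np.linalg.norm(mic_pos - src, axis=1)
p = np.exp(-1j * 2 * np.pi * 3000.0 / 343.0 * r_src) / r_src
csm = np.outer(p, p.conj())

grid = np.array([[x, y, 1.0] for x in np.linspace(-0.3, 0.3, 13)
                              for y in np.linspace(-0.3, 0.3, 13)])
power = beamform_map(csm, mic_pos, grid, freq=3000.0)
print(grid[power.argmax()])    # peaks at the scan point nearest the true source position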
(1) Department of Geology, University of Leicester, UK (2) Geotechnical, Geophysical Properties and Processes, British Geological Survey, UK (3) School of Engineering, University of Warwick, UK (4) Kings College, London, UK
ABSTRACT
This research demonstrates that it is possible to transmit acoustic signals in the frequency range 100 Hz - 100 kHz through objects in air. The technique can thus be used to examine the contents of containers. Of particular interest are the possible applications in cargo screening, i.e. future uses at border crossings and transport hubs to find illicit cargo. A feasibility study has been performed, which has shown that acoustic signals can be transmitted through a simulated container with curtain side-walls, as usually seen on European road transportation vehicles. It has been found that it is necessary to use frequency-modulated acoustic waveforms, together with cross-correlation, to obtain sufficient signal-to-noise ratios to effect a measurement. In addition, research has shown that novel forms of signals, involving discrete frequencies, can help in obtaining more information. Time reversal techniques have also been shown to help in the identification of resonances within the cargo container, which have been correlated to the positions of objects within it. These resonances will form the basis of future research into acoustic tomographic imaging of such containers for security screening applications.
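As a minimal illustration of why frequency-modulated waveforms combined with cross-correlation raise the effective signal-to-noise ratio, the following sketch correlates a received signal against a transmitted linear chirp; the sample rate, sweep range and delay are hypothetical values, not taken from the study.

```python
import numpy as np
from scipy.signal import chirp, correlate

fs = 400_000                          # sample rate [Hz], hypothetical
t = np.arange(0, 0.05, 1 / fs)        # 50 ms excitation
tx = chirp(t, f0=100, f1=100_000, t1=t[-1], method='linear')

# Simulated received signal: attenuated, delayed copy buried in noise.
delay = 1000                                   # samples (2.5 ms)
rx = np.zeros(len(tx) + 2000)
rx[delay:delay + len(tx)] += 0.01 * tx
rx += 0.05 * np.random.randn(len(rx))

# Cross-correlation (matched filtering) concentrates the sweep energy into a
# single peak, recovering the transit time despite the poor per-sample SNR.
xc = correlate(rx, tx, mode='valid')
estimated_delay = np.argmax(np.abs(xc))
print(f"estimated transit delay: {estimated_delay / fs * 1e3:.2f} ms")
```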
(1) Department of Signal Theory and Communications, Universidad Carlos III de Madrid, Spain (2) Multimedia Communications and Signal Processing, University Erlangen-Nuremberg, Germany
ABSTRACT
Acoustic echo cancellers (AEC) are becoming increasingly important because of the widespread use of hands-free devices. Due to their simplicity, most of the cancellers rely on NLMS-type adaptive filters to model and track the time-varying echo path. Recently, adaptive combinations of filters are gaining increased popularity as a flexible and versatile approach to overcome compromises inherent to adaptive filters, thus enhancing the overall performance. Regarding AEC scenarios, such filter combinations have already been proposed for, e.g., improving the trade-off between convergence speed and steady-state error or for reducing the dependency on varying ratios of linear and nonlinear distortions. In this paper, we present a new AEC approach, showing improved performance for unknown or time-varying signal-to-noise ratios (SNR). The proposed scheme exploits the fact that the coefficient energy of a typical echo path is not uniformly distributed, but decays exponentially. Under this condition, an NLMS filter will introduce significant estimation errors for less significant filter taps due to gradient noise. Since the number of affected coefficients strongly depends on the present SNR and hence the implied noise floor, the cancellation performance may degrade considerably for low SNRs.
To alleviate this coefficient noise, the adaptive impulse response is split into a number of non-overlapping blocks, each of which is combined with a virtual 'zero-block', having fixed zero coefficients, with time-varying relative weights for both the nonzero and the zero-block. In practice, this results in a possibly biased estimation of some of the filter coefficients. However, it has been shown that such estimates can yield advantages in terms of mean-square error, especially for low SNRs. The combination of each block is implemented by convex mixing, where the control parameter is updated according to a stochastic gradient descent method so as to minimize the global error of the AEC. For moderate block numbers, the increase in computational cost over a conventional NLMS canceller is negligible. In particular, this contribution investigates the operation of the blockwise combined filter under low SNR conditions, comparing its performance with standard NLMS- and PNLMS-type filters as well as adaptive algorithms using exponentially weighted step sizes. The robustness and benefits of the proposed approach are thereby experimentally verified for noise and speech inputs. Moreover, the influence of the number of blocks and the mixing parameters is also studied and indications on future work (impulse noise, extension to nonlinear filters) are given.
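For readers unfamiliar with the baseline the authors build on, a plain NLMS echo canceller can be sketched as follows; this is the conventional algorithm only, not the proposed blockwise convex combination with zero-blocks.

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, n_taps=512, mu=0.5, eps=1e-6):
    """Plain NLMS acoustic echo canceller (baseline sketch).

    far_end : loudspeaker (reference) signal
    mic     : microphone signal = echo + near-end noise
    Returns the error (echo-cancelled) signal and the final filter estimate.
    """
    w = np.zeros(n_taps)
    e = np.zeros(len(mic))
    x_buf = np.zeros(n_taps)
    for n in range(len(mic)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = far_end[n]
        y_hat = np.dot(w, x_buf)            # estimated echo
        e[n] = mic[n] - y_hat               # residual after cancellation
        # Normalised LMS update: step size scaled by the input energy.
        w += mu * e[n] * x_buf / (np.dot(x_buf, x_buf) + eps)
    return e, w
```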
Electrical and Computer Engineering Faculty, Shahid Beheshti University, Tehran, Iran
ABSTRACT
In this paper we consider coding of wideband speech and audio signals using spectral replication based on the parametric stereo coding technique. This technique is used to code stereo signals with a good correlation between the left and right channels; the goal of the method is to reduce redundancy between these channels. A mono downmix signal, made by a linear combination of the two channels in the frequency domain, plus a small amount of side information is transmitted instead of the two channels. The combined signal is transformed to the time domain, encoded with any existing coder, and transmitted as the main part of the bit-stream. The side information includes four parameters: the Interchannel Intensity Difference (IID), a logarithmic ratio between the energies of the channels in each subband; the Interchannel Phase Difference (IPD); the Overall Phase Difference (OPD), which represents the phase difference between the downmix signal and one of the channels; and finally the Interchannel Coherence (IC), defined as the normalized cross-correlation coefficient. In the decoder both left and right channels are reconstructed from the mono downmix signal and the received parameters. In order to extract the side parameters, the bandwidth is divided into non-uniform subbands. These subbands are finer at lower frequencies and wider at higher frequencies, consistent with the human auditory system. To achieve a better quality or a smaller bit rate, the update rate of the parameters can be changed.
We have adapted this technique to create a new spectral replication coding scheme for mono signals. In our method, we divide a mono signal into low- and high-frequency parts, with the low-frequency part playing the role of the downmix (main) signal. In the encoder we decode and reconstruct the low-frequency part in order to calculate the necessary parameters. In doing so, however, we need to extract and send only three parameters rather than four: the OPD is not needed. At the decoder side the high-frequency part is reconstructed from the decoded low-frequency part and the transmitted parameters. We use a scheme based on the Wavelet Packet Transform, with perceptual and variable bit rate coding using JPEG-type Huffman/run-length coding tables for the low-frequency part, and 15 parameters (using 55 bits at most) for coding the high-frequency part of 32 ms frames. Preliminary results show that we can achieve an 8-10% reduction in bit rate while keeping the same PESQ values.
Institute of Technical Acoustics, RWTH Aachen University, Germany
ABSTRACT
The levels of different sources (CD, radio, MP3, navigation, etc.) differ substantially, making a single volume setting for all of them annoying. Even individual level settings for each source are not helpful, since the loudness and dynamic range of the source programs may vary. The driver (particularly when listening to classical music) is therefore forced to readjust the volume continuously so as not to miss his beloved music or be drowned out by the radio information program in between. A solution is presented that, on the one hand, uses a sophisticated algorithm to measure the background noise inside the car and, on the other, uses the real loudness information (Zwicker) from the running program to adjust a comfortable level and reduce dynamic shifts.
(1) University of New Brunswick, Canada (2) Central University of Las Villas, Cuba
ABSTRACT
This work is focused on modeling the perception of tremor found in pathological voices. The main research objective is to automatically separate the different sources of tremor and to estimate the magnitude of tremor perturbations using signal processing techniques. A new assessment algorithm, derived from recorded speech, combines non-linear filtering, amplitude demodulation and spectral estimation techniques. The algorithm is able to separate tremor sources originating in the glottal area from those of the vocal tract, combining both sources to develop an objective acoustic measurement of tremor perturbations. The algorithm is evaluated against the perceptual judgments provided by speech pathologists and against other reported indexes, with excellent performance. The benefit of estimating independent sources of tremor for differentiating normal from pathological tremor and for modeling the perception of tremor perturbations is demonstrated.
(1) Galgotias College of Engineering and Technology, Greater Noida, India (2) Dayalbagh Educational Institute, Dayalbagh, Agra, India
ABSTRACT
Acoustic signal analysis has a number of applications. The analysis can be carried out in the time domain as well as the frequency domain. Here, the acoustic signals of unknown mechanical devices have been analysed in the frequency domain and recognized. The paper includes analysis using Fast Fourier Transforms and Wavelet Transforms, and the results have been found to be satisfactory.
Institute of Acoustics, Chinese Academy of Sciences, Beijing, P.R.China
ABSTRACT
Condition monitoring and fault diagnosis are essential to the effectiveness and reliability of machinery. To improve the accuracy of fault diagnosis, a novel diagnostic model based on support vector data description (SVDD) and Dempster-Shafer (D-S) evidence theory is proposed. In the method, time and frequency domain fault features are first extracted and used as the input vector of a single SVDD fault classifier, which is trained on normal data and a few faulty data. The identification result of each single SVDD classifier at a different measuring point around the machinery is then taken as an independent evidence source, and the evidence set is thereby constructed. Based on a unified discernment frame for fault diagnosis, all evidences are aggregated by Dempster's combination rule. Through multi-level information fusion, the method makes full use of the measurement information and resolves the problem of misrecognition by a single classifier. Experimental results show that the proposed algorithm improves the identification precision of fault diagnosis and deals effectively with contradictions between classifiers.
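Dempster's combination rule, used here to fuse the outputs of the SVDD classifiers, can be illustrated with a small sketch; the hypothesis names and mass values below are invented for illustration only.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments
    defined over the same discernment frame (dict: frozenset -> mass)."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to the empty set
    # Normalise by the non-conflicting mass (1 - K).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Toy example: two classifiers (evidence sources) over fault hypotheses.
F = frozenset
m_sensor1 = {F({'normal'}): 0.6, F({'fault_A'}): 0.3, F({'normal', 'fault_A'}): 0.1}
m_sensor2 = {F({'fault_A'}): 0.5, F({'normal'}): 0.3, F({'normal', 'fault_A'}): 0.2}
print(dempster_combine(m_sensor1, m_sensor2))
```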
Ritsumeikan University, Kusatsu-shi, Japan
ABSTRACT
Sound source localization plays an important role in extracting a target sound. In this paper we describe the localization of multiple sound sources using a distributed microphone system, i.e. a recording system with multiple microphones dispersed over a wide area. Our algorithm localizes a sound source by finding the position that maximizes the correlation coefficient accumulated over multiple channel pairs. After the first sound source is estimated, a typical pattern of the accumulated correlation for a single sound source is subtracted from the observed distribution of the accumulated correlation. Subsequently, the second sound source is localized by finding the maximum correlation again. To evaluate the effectiveness of the proposed method, experiments on multiple sound source localization were carried out in an actual office room. The results show that the average error distances of the multiple sound sources are less than 13.7 cm. Our localization algorithm realizes multiple sound source localization robustly and stably.
(1) INMC, School of EECS, Seoul National University, Seoul, Korea (2) Electronics and Telecommunications Research Institute, Daejeon, Korea
ABSTRACT
In spatial audio reproduction, loudspeakers are in most instances arranged in the horizontal plane, so acoustic images are restricted to that plane. For the reproduction of 3-D acoustic images, the horizontal loudspeaker layout must be extended in the vertical direction using additional loudspeakers. Although binaural techniques based on head-related transfer functions (HRTFs) could be used, they are constrained to reproduction over headphones or earphones. In this paper, double-layered loudspeaker arrays were used to reproduce virtual sources on the vertical plane in front of a listener. The system is built up with 32 loudspeakers: each array has 16 loudspeakers with 17 cm spacing, and the two layers are separated by 1.65 m in height. For the localization of virtual sources in both azimuth and elevation, a spatial sound rendering technique based on Wave Field Synthesis (WFS) is proposed. First, a two-dimensional wave field is synthesized by a virtual loudspeaker array located between the two real array layers. Each column of upper and lower loudspeakers generates a virtual loudspeaker, and elevation vectors for each loudspeaker are calculated from the layout of the virtual source and the upper and lower loudspeaker pairs. Each virtual loudspeaker signal is then distributed to the real upper and lower loudspeaker pairs by amplitude panning using the elevation vectors. Only vertical amplitude panning is used because horizontal acoustic images are localized by 2-D WFS. Subjective listening tests were conducted to evaluate frontal localization in this system. In the first experiment, the accuracy of the listening test setup was assessed. In the second experiment, a frontal localization test using 3-D vector base amplitude panning (VBAP) was carried out for comparison with the proposed rendering method. In the last experiment, the frontal localization of the proposed WFS rendering method was evaluated.
(1) National Institute of Information and Communications Technology, Japan (2) Graduate School of Engineering, Kyoto University, Kyoto, Japan
ABSTRACT
We propose a 3-D sound reproduction system based on the boundary surface control principle (BoSC system) and evaluate its performance via demonstration and exhibition.
The BoSC reproduction system, dome-shaped and constructed of wood, consists of 62 full-range loudspeakers and eight subwoofer loudspeakers.
The BoSC recording system is a $\mathrm{C}_{80}$ fullerene-shaped array, 46 cm in diameter, consisting of 70 microphones.
In the listening room, 62 full-range loudspeakers assisted by the designed inverse filters reproduce sound fields identical to the primary sound fields by reproducing sound pressure on the 70 microphones which surround the listener's head.
The BoSC system requires huge numerical calculation to reproduce authentic 3-D sound fields.
Consequently, a pre-convolution calculation of the inverse filters is required to reproduce and transmit these fields.
Therefore, to realize a real-time 3-D sound field reproduction system, we investigated optimization of the loudspeaker and microphone configuration using Gram-Schmidt orthogonalization.
In the BoSC system, the inverse filters are determined by an inverse system of a transfer function matrix measured between each loudspeaker and microphone pair.
Therefore, a transfer function matrix with a huge condition number degrades the accuracy of the reproduced sound fields.
The selection of loudspeakers in an active control system such as the BoSC system corresponds to the selection of column vectors of the transfer function matrix.
This means that, to reduce the number of loudspeakers, column vectors are selected only up to the required number.
By applying Gram-Schmidt orthogonalization to this selection, loudspeakers are chosen in order of linear independence, from highest to lowest.
In this paper, the effect of the reduction of loudspeakers and microphones is evaluated by the subjective assessment of a sound image localization test.
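A minimal sketch of greedy column selection by Gram-Schmidt orthogonalisation, of the kind described above for choosing loudspeakers, might look as follows; the matrix dimensions and the fixed number of selections are assumptions, not the authors' implementation.

```python
import numpy as np

def select_loudspeakers(G, n_select):
    """Greedy loudspeaker (column) selection via Gram-Schmidt orthogonalisation.

    G        : (n_mics, n_speakers) complex transfer-function matrix at one frequency
    n_select : number of loudspeakers to keep
    Picks columns in decreasing order of linear independence from those already
    chosen, which tends to keep the condition number of the reduced matrix small.
    """
    residual = G.astype(complex).copy()        # parts not yet explained by the picks
    selected = []
    for _ in range(n_select):
        norms = np.linalg.norm(residual, axis=0)
        norms[selected] = -1.0                 # never pick the same column twice
        k = int(np.argmax(norms))
        selected.append(k)
        q = residual[:, k] / np.linalg.norm(residual[:, k])
        # Deflate: remove the chosen direction from every remaining column.
        residual -= np.outer(q, q.conj() @ residual)
    return selected
```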
Department of Electrical Engineering and Computer Science, Iwate University, Morioka, Japan
ABSTRACT
In this paper, we describe a method for estimating a crack position in a concrete structure using several accelerometers. An array of accelerometers is installed on the concrete structure, and a low-frequency vibration generated with a small impulse hammer is used instead of a high-frequency vibration, since a higher-frequency vibration attenuates rapidly compared with a low-frequency one. If a crack exists, a reflection wave is generated from the crack position. Because the concrete structure is elastic, it has three wave-propagation modes: the surface-wave mode, the primary-wave mode, and the secondary-wave mode. Estimating the position precisely is difficult because the power of the necessary primary-wave mode is weaker than that of the surface-wave mode. To estimate the crack position precisely, we previously proposed a method for eliminating the unwanted surface wave and side-wall reflections, in which five parameters are used to estimate an unwanted surface wave or side-wall reflection by a least-mean-square technique. However, since it takes a very long time to estimate a single unwanted wave (a surface wave or a side-wall reflection), the method did not work in practice when two waves overlapped with each other (ten parameters are necessary in this case). Therefore, in this paper we propose the use of a genetic algorithm (GA). As a result, the processing time was shortened dramatically compared with the conventional method, and we could distinguish two waves reflected from two close boundaries of a caisson.
University of Massachusetts, Dartmouth, USA
ABSTRACT
The Big Brown Bat (Eptesicus fuscus) uses Frequency Modulated (FM) echolocation calls to accurately estimate range and resolve closely spaced objects. Recent work by Fontaine and Peremans has shown that a sparse representation model for bat echolocation calls facilitates distinguishing objects spaced as closely as 2 microseconds in time-delay and is also robust to noise over a realistic range of signal-to-noise ratios (SNR). Fontaine and Peremans used the random FIR filter Compressive Sensing (CS) technique as their input method. Their study demonstrated that the undersampled data provided by the FIR filter output still contains sufficient information to accurately reconstruct and resolve sparse target signatures using L1 minimization techniques from CS. Their work raises the intriguing question as to whether under-sampled sensing approaches structured more like the bat's auditory system still contain the information necessary for the hyper-resolution observed in behavioral tests. This research investigates the ability to estimate sparse echo signatures using a downsampled filterbank as the sensing basis, which is closer to a bat auditory system than randomized FIR filters. The returning echoes are sensed using a discrete-time constant-bandwidth filter bank followed by downsampling that loosely resembles the filtering and smoothing of the bat's cochlea. L1 minimization then reconstructs the sparse target return from this under-sampled signal. Initial simulations demonstrate that this filterbank CS model reconstructs sparse sonar targets with a high degree of accuracy while substantially undersampling the filter outputs. In addition, the overdecimated filterbank CS approach has better target resolution than the Matched Filter for SNR values ranging from 5-45 dB and has better detection performance than the Inverse Filter method. This is all accomplished while undersampling the return echo signal by as much as a factor of six. The deterministic sensing basis has the distinct advantage over the random sensing basis that the circulant structure of the filterbank sensing matrix can easily be implemented in electric circuits.
Ritsumeikan University, Kusatsu, Kyoto, Japan
ABSTRACT
Many conventional security systems utilize only visual information provided by a video camera. However, such systems may miss important scenes that occur in the blind areas of a video camera, even when multiple video cameras are used. Acoustic security systems, on the other hand, can detect sound events with sensing microphones and can be used to support conventional visual security systems through acoustic event detection. In our research, we focus on near-field acoustic security, and we design a prototype of a three-dimensional near-field acoustic security system based on paired microphones and an automatically steered video camera. The system responds to a sound event and automatically focuses the video camera on the sound source in real time. Conventional acoustic security systems detect a sound event with a large-scale microphone array in the far field; however, their detection error increases in the near field. In our former research, we proposed techniques to localize the position of a sound source in the near field and confirmed that the proposed technique, based on cross-power spectrum phase analysis with paired microphones (Multi-paired CSP), can robustly detect the two-dimensional location of a sound event in the near field. In this work, we design a prototype system that utilizes the Multi-paired CSP as an acoustic security system, robustly detects a sound event in the near field, and then automatically steers the video camera to the detected sound event in real time. In this system, we extend the Multi-paired CSP to three dimensions, consisting of azimuth, elevation and distance. We carried out evaluation experiments in a real environment and confirmed that the proposed security system can robustly detect a sound event and quickly steer a video camera toward it. We could therefore realize a prototype near-field acoustic security system based on paired microphones and an automatic video camera.
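The cross-power spectrum phase analysis at the heart of the Multi-paired CSP technique is, for a single microphone pair, essentially the GCC-PHAT time-delay estimator; a generic sketch (not the authors' multi-pair accumulation or its 3-D extension) is given below.

```python
import numpy as np

def csp_delay(x1, x2, fs, max_delay=None):
    """Cross-power spectrum phase (CSP / GCC-PHAT) time-delay estimate
    between one microphone pair."""
    n = len(x1) + len(x2)
    X1 = np.fft.rfft(x1, n)
    X2 = np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    # Phase transform: keep only phase information, whitening the spectrum.
    cross /= np.abs(cross) + 1e-12
    cc = np.fft.irfft(cross, n)
    if max_delay is None:
        max_delay = min(len(x1), len(x2)) // 2
    # Rearrange so that negative and positive lags sit around the centre.
    cc = np.concatenate((cc[-max_delay:], cc[:max_delay + 1]))
    lag = np.argmax(np.abs(cc)) - max_delay
    return lag / fs          # delay in seconds (positive: x1 lags x2)
```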
NTT Cyber Space Laboratories, NTT Corporation, Japan
ABSTRACT
The direct-to-reverberant energy ratio (DRR) has attracted attention as useful information for estimating the distance from a microphone to a speaker. In particular, when the environment is highly reverberant, conventional microphone array techniques fail to estimate the distance correctly. This is because the time and sound level differences of arrival of the direct sound between microphones, which are exploited as cues to the sound source position, become ambiguous due to the reverberation. Even in such an environment, however, the distance can still be estimated from the DRR, because the DRR maintains a one-to-one relation with the distance in a reverberant environment.
The most straightforward way to obtain the DRR is to calculate it from the impulse response between the source and the microphone; however, this is cumbersome because prior measurement of the impulse response is required. To overcome this restriction, a method was proposed to estimate the DRR directly from the received sounds. It utilized a binaural input signal and estimated the energy of the reverberant component by eliminating the direct component using the equalization-cancellation (EC) technique. However, the EC technique loses accuracy in DRR estimation in highly reverberant environments because it relies on a model in which no reverberation component propagates from the direction of the sound source. We have instead proposed a DRR estimation method using a direct-to-reverberant (D/R) spatial correlation matrix model (hereafter the "DRSC model"), which consists of the spatial correlation matrices of the direct sound and the reverberation. The DRSC model assumes that the direct sound propagates only from the direction of the sound source while the reverberation arrives uniformly from every direction. We then calculate the DRR from the power spectra of both components, which are estimated from the correlation matrix of the observed signals. In this contribution, we evaluate the adequacy of the DRSC model by using DRR estimation as an evaluation criterion. We first investigate the accuracy of DRR estimation based on the DRSC model under various conditions and then discuss the validity of the model. Furthermore, we compare the results of DRR estimation with those of the conventional method based on the EC technique.
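For reference, the impulse-response-based DRR mentioned above as the most straightforward approach can be computed as in the following sketch; the 2.5 ms direct-sound window is an assumed convention, not a value from the paper.

```python
import numpy as np

def drr_from_ir(h, fs, direct_window_ms=2.5):
    """Direct-to-reverberant ratio (DRR) from a measured room impulse response.

    h : impulse response samples
    The direct part is taken as a short window around the strongest peak;
    everything after that window is treated as reverberation.
    """
    peak = int(np.argmax(np.abs(h)))
    half_win = int(direct_window_ms * 1e-3 * fs)
    lo = max(peak - half_win, 0)
    hi = peak + half_win
    direct_energy = np.sum(h[lo:hi] ** 2)
    reverb_energy = np.sum(h[hi:] ** 2)
    return 10.0 * np.log10(direct_energy / (reverb_energy + 1e-12))
```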
User Interface Laboratory, KDDI R&D Laboratories Inc., Japan
ABSTRACT
This paper presents a sample-wise acoustic positioning method using natural gradient adaptation to track fast source location changes. We are studying a sound imaging system with a binaural reproduction setup for virtual and augmented reality applications, such as a navigation system for pedestrians on mobile devices. The system uses stereo earphones, binaural microphones, and a hand-held mobile phone that emits a measurement signal for head tracking and source positioning. In this study, we developed a sample-wise acoustic positioning method using multiple receivers based on natural gradient adaptation. This method directly estimates the three-dimensional source position on the spherical coordinate system defined by the receivers' positions by minimizing a cost function, and was derived under the constraint that the relative positions of the receivers are spatially fixed on that coordinate system. The cost function is defined as the residual sum of squares between the actual and estimated source signals, where the estimated source signals are those whose time delay and amplitude have been compensated according to the estimated source position. The proposed method executes sample-wise processing at a 48 kHz sampling rate in real time. It reduces the azimuth error by 90% compared with the conventional correlation-based method at movement speeds of over 30 degrees per second, which corresponds to natural head turning. The azimuth error was within 1 degree, which means that this method provides sufficient accuracy for the sound imaging system.
(1) Graduate School, The University of Tokyo, Japan (2) Institute of Industrial Science, The University of Tokyo, Japan
ABSTRACT
Swept signals are nowadays widely used in acoustic measurements to obtain impulse responses of the system under test. The overall spectrum, the inverse filter that compresses the sweep into an impulse, and the background noise conditions together prescribe the result's signal-to-noise ratio as a function of frequency. This paper proposes a time-domain sweep synthesis method using composite square and monomial power function modulated sine sweeps that can customize the resulting SNR-frequency function. Theoretical and practical aspects as well as measurement results are presented.
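As background, the conventional exponential sine sweep and its inverse filter (the baseline that such sweep designs generalise) can be generated as below; the band edges, duration and sampling rate are arbitrary examples, not the proposed composite modulation.

```python
import numpy as np

def exp_sweep_and_inverse(f1, f2, duration, fs):
    """Conventional exponential (log) sine sweep and its inverse filter.
    Convolving the measured response with the inverse filter compresses the
    sweep into an approximately band-limited impulse."""
    t = np.arange(int(duration * fs)) / fs
    R = np.log(f2 / f1)
    sweep = np.sin(2 * np.pi * f1 * duration / R * (np.exp(t * R / duration) - 1))
    # Inverse filter: time-reversed sweep with an exponential amplitude decay
    # that compensates the longer dwell time at low frequencies.
    inverse = sweep[::-1] * np.exp(-t * R / duration)
    return sweep, inverse

fs = 48_000
sweep, inverse = exp_sweep_and_inverse(20.0, 20_000.0, 2.0, fs)
# Measured response y = system(sweep); then h ~ np.convolve(y, inverse)
```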
(1) Research Institute of Electrical Communication, Tohoku University, Sendai, Japan (2) Faculty of Engineering, Shinshu University, Nagano, Japan
ABSTRACT
Sound image localization of a binaural signal can be controlled by convolving the listener's head-related transfer functions (HRTFs) corresponding to each sound source to be rendered. Using this technique, a virtual auditory display (VAD) that can display sound images at arbitrary positions can be constructed. A VAD applying this architecture must be given a set of the listener's HRTFs in advance. However, it is difficult to measure HRTFs in all directions around the listener. Therefore, an interpolation method for HRTFs measured at discrete positions is needed for such a VAD system. Previous studies investigated methods in which linear interpolation is used in the time or frequency domain. These methods have good accuracy when the directions corresponding to the HRTFs used in the interpolation are sufficiently close to each other. In contrast, when the directions are not close, the accuracy of the interpolation decreases markedly. In particular, the frequencies of peaks and notches in the interpolated HRTFs become inaccurate, because these frequencies change with the sound source position. The frequencies of notches, however, are important cues for elevation localization. Therefore, an interpolation method that can represent the frequencies of peaks and notches of HRTFs accurately is necessary to realize a high-definition VAD based on the HRTF synthesis technique. This study proposes a novel method for HRTF interpolation. In the method, HRTFs are first modeled using the common-pole and zero model in the z-plane. The loci of the zeros as a function of elevation are then traced using a dynamic programming (DP) method based on distances in the z-plane. The zeros are then linearly interpolated in the z-plane for the desired directions, and the interpolated HRTF is obtained by transforming from the z-plane to the frequency domain. The accuracy of the proposed method was evaluated, demonstrating that the spectral distortion can be reduced. Furthermore, both the frequency and depth of notches can be interpolated accurately. Using the proposed method, the interpolation accuracy can be maintained even when the directions of the HRTFs used in the interpolation are separated by 10 degrees. In contrast, the accuracy of simple linear interpolation in the frequency domain decreased as the directional discrepancy among HRTFs increased, indicating that the number of HRTF directions that must be prepared in advance for a VAD system can be reduced significantly using the proposed method.
(1) Department of Mechanical Engineering, Graduate School, Hanyang University, Korea (2) Division of Automotive Mechanical Engineering (3) School of Mechanical Engineering, Hanyang University, Korea
ABSTRACT
Noise reduction of vacuum cleaners is important, as noise has become a standard criterion for judging product quality. To reduce the noise of a vacuum cleaner, the noise sources must be identified correctly and the contribution of each identified source to the output noise must be understood. Because the noise sources in a vacuum cleaner, a small and complicated system, are correlated, this analysis is not easy. In this case, the multi-dimensional spectral analysis (MDSA) method, which can remove the correlation between noise sources and determine their pure contributions, needs to be applied. In this study, we perform a transfer path analysis between the output noise and the noise measured inside and outside the vacuum cleaner.
Kyung Hee University, Korea
ABSTRACT
This paper introduces a method to design an emergency guiding system with changeable directivity based on a speaker array. In general, existing emergency evacuation guiding systems depend on visual techniques such as emergency lights or LEDs. In a fire emergency, however, people may not have a clear view because of smoke from the fire. To cope with this problem, guiding systems based on visual techniques need to be replaced with systems using sound from a number of speakers, as proposed in this paper. A fundamental method to implement sound directivity is to use a speaker array consisting of many speakers in a line. In this paper, we obtain sound directivity using a modified version of this fundamental method that works in buildings of various shapes. We use a modified speaker array system to create a time delay and a frequency difference based on the Haas effect and the Doppler effect. For more accurate direction information, a servo motor is attached to each speaker of the array to create sound directivity physically. In this case all speakers are connected serially for audio signal transmission, which simplifies speaker installation.
The system with more accurate direction proposed in this paper has been designed and tested as follows. First, we find the appropriate frequency range and sound pressure level in a noisy emergency environment. Second, we examine the difference between a normal speaker array system and a physically changeable-directivity speaker array. Lastly, we verify the efficiency of our proposed evacuation guiding system with sample groups of people in a virtual emergency studio environment. In conclusion, the proposed system achieves an increased evacuation rate under emergency conditions, and its serial transmission of the audio signal allows easy maintenance and low installation cost.
(1) Department of Physics and Mathematics, University of Eastern Finland, Kuopio, Finland (2) Diagnostic Imaging Centre, Kuopio University Hospital, Kuopio, Finland
ABSTRACT
Osteoporosis is a major worldwide health concern causing a growing number of fractures annually. Backscatter parameters derived from pulse-echo ultrasound (PEUS) measurements have been shown to relate to bone microstructure and composition. PEUS may also be applied at the most critical fracture sites of the proximal femur by using the dual-frequency ultrasound technique, which is capable of minimizing measurement errors that arise from soft tissues overlying the bone. At the proximal femur the cortical layer is often too thin to be detected with traditional peak detection methods. In this study, the cepstrum method was applied to determine thin cortical layer thickness in numerical simulations and experiments. Ultrasound propagation in a water-cortical bone-fat construct was simulated (11 simulations, cortical bone thickness varied from 0.5 to 1.5 mm) with the Wave 2000 software (finite difference time domain method). The transducer operated at 5 MHz, was 10 mm in diameter and had a focal length of 30 mm.
For in vitro experiments, 5 thin slices of bovine cortical bone (thickness 0.5mm - 2.5mm) from tibial shaft were cut with a low-speed diamond saw. Cortical-trabecular bone samples (n =4) were sawn from the epiphysis of bovine tibia. The cortical-trabecular samples were scanned laterally to determine the mean cortical thickness with the cepstrum technique.
Acoustic measurements were conducted using a focused transducer with a centre frequency of 2.25 MHz and an UltraPAC ultrasound pulser/receiver system controlled with LabVIEW 8.2-based software. Ultrasound signals were analyzed with Matlab. Cortical bone thickness determined with the cepstrum technique showed good agreement with the thickness of the cortical bone in the simulation geometry (r = 1.0, n = 11, p < 0.001) and with the bovine bone samples in vitro (r = 0.94, n = 9, p < 0.001). The accuracy of the cepstrum method, assessed as a mean absolute error, was 320 microns in vitro and 34 microns in simulations. In this study, the cepstrum analysis of ultrasound reflections from cortical bone was found to provide a reasonable estimate of the thickness of thin cortical bone layers. This method may be applied for the assessment of cortical thickness at the most severe fracture sites of the proximal femur, where the cortical layer is thin and trabecular bone is present beneath it. Moreover, it may be used to compensate for the effect of cortical bone on ultrasound backscatter measurements in trabecular bone and could therefore provide more reliable diagnosis of osteoporosis with ultrasound techniques.
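A minimal sketch of a cepstrum-based thickness estimate of the kind described above is given below; the assumed sound speed in cortical bone and the low-quefrency cut-off are illustrative values, not those used in the study.

```python
import numpy as np

def cortical_thickness_cepstrum(echo, fs, c_bone=3500.0, min_us=0.2):
    """Estimate thin cortical layer thickness from a pulse-echo signal via the
    real cepstrum. The back-surface echo is a delayed copy of the front-surface
    echo, which appears as a cepstral peak at the round-trip delay.

    c_bone : assumed longitudinal sound speed in cortical bone [m/s]
    min_us : ignore quefrencies below this value [microseconds]
    """
    spectrum = np.fft.rfft(echo)
    log_mag = np.log(np.abs(spectrum) + 1e-12)
    cepstrum = np.fft.irfft(log_mag)
    # Skip the very-low-quefrency region dominated by the pulse envelope.
    q0 = int(min_us * 1e-6 * fs)
    q = q0 + np.argmax(cepstrum[q0:len(cepstrum) // 2])
    round_trip = q / fs                       # two-way travel time [s]
    return c_bone * round_trip / 2.0          # thickness [m]
```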
Digital System and SoC Design Lab, Kyung Hee University, Yongin, Gyeonggi, Korea
ABSTRACT
This paper introduces a method to detect the accurate position, including distance and direction, of a sound source based on a new 'T'-type microphone array with four sound sensors, developed in the LabVIEW environment. The 'T'-type microphone array consists of three linear microphones on the front side and one microphone in the center of the rear side. Existing linear microphone array types have difficulty detecting a sound source located behind the array, but the type proposed in this paper makes omnidirectional detection possible. The 'T'-type array also estimates the source distance by comparing the sound pressure levels from a front sound sensor and the rear sound sensor; the reduction rate of the sound pressure level is used to calculate the source distance.
In general, the direction of a sound source is estimated by calculating the time delay between separate sensors. The technique proposed in this paper uses both the time-delay approach mentioned above and a geometrical method based on triangulation to obtain higher accuracy in the direction calculation. Sound-related hardware design requires complex signal processing techniques, and verifying the desired algorithms is difficult. In this case, LabVIEW can be used to support intuitive PC-based programming and effective algorithm verification. In conclusion, the goal of this paper is to obtain the following two achievements: first, the proposed 'T'-type microphone array provides effectiveness and accuracy in detecting the sound source direction and distance in comparison with existing systems; second, we show that LabVIEW-based design enables easier and faster verification of the desired algorithms before the sound localization hardware is modeled.
School of Electrical Engineering and Computer Science, Kyungpook National University, Daegu, Korea
ABSTRACT
Segmentation of the phonocardiogram (PCG) by detecting its major sound components, S1 and S2, is generally the first step in heart sound analysis for automated diagnosis of heart disorders. Most heart sound segmentation algorithms are based on the energy of the signal, which may vary considerably depending on the subject or auscultation position. Recently, under the assumption that the human heart acts like a hidden dynamic system that undergoes state transitions to give rise to various heart sounds, a simplicity measure was proposed that shows large amplitude in the regions where the major components of the PCG occur. The advantage of the simplicity measure is that it is insensitive to amplitude variations of the heart sound, so it seems very promising for detecting the S1 and S2 components in heart sound analysis. We found, however, that the simplicity measure does not give satisfactory results when filtering is applied to the PCG for further processing or when the signal-to-noise ratio of the PCG is high. In this paper we deal with this problem, which may be encountered when the simplicity measure is used for PCG analysis. We investigate the influence of the parameter values used in the simplicity analysis and propose a method to overcome the problem. The proposed method and experimental results are presented together with our discussion.
Universal Media Research Center, NICT, Koganei, Tokyo, Japan
ABSTRACT
We have been investigating ultra-realistic communication techniques. If realistically reproduced 3D video and audio can be made to appear in a 3D space by applying these techniques, and several people can view the object from anywhere in its vicinity without having to wear equipment such as glasses, more realistic forms of communication (e.g. 3D television, 3D teleconferencing) will become possible than those currently provided by conventional video and audio techniques (HD video and 5.1-channel audio). To realize a 3D radiated sound field in which multiple listeners can hear the sound anywhere around the object without having to wear equipment such as headphones, we previously proposed a near 3D sound field reproduction system using directional loudspeakers and wave field synthesis. In the conventional system, however, the size of the loudspeaker array must be the same as that of the microphone array; when the sizes differ, the 3D radiated sound field captured by the microphone array cannot be accurately reproduced. If the boundary surface control technique is introduced into the conventional system, the 3D radiated sound field can be accurately reproduced even if the size of the loudspeaker array is not the same as that of the microphone array. In this paper, the mathematical derivation of the reproduction of the 3D radiated sound field is described, and a 3D radiated sound field reproduction system using directional loudspeakers and boundary surface control is newly proposed.
(1) Department of Electronics, The University of Electro-Communications, Tokyo, Japan (2) Department of Information Systems, Daido University, Nagoya, Japan
ABSTRACT
This report proposes a new processing method for automatically detecting road surface states from the tire noise of passing vehicles. To detect tire noise, we use a commercially available microphone as an acoustic sensor, which enables us to easily reduce the cost and size of a practical detection system. We propose several feature indicators in the frequency and time domains to classify the states into four categories: snowy, slushy, wet, and dry. The method is based on artificial neural networks. The proposed classification is carried out in multiple neural networks using learning vector quantization, and the outcomes of the networks are then integrated by a voting decision-making scheme. From experimental results obtained over more than a week in snowy areas, it has been demonstrated that an accuracy of approximately 90% can be attained in predicting road surface states.
Ritsumeikan University, Kusatsu, Kyoto, Japan
ABSTRACT
Loudspeakers are widely used for transmitting sound waves. Since the sound wave emitted by a loudspeaker propagates in all directions, emitting sound only to a specific area is difficult. The parametric loudspeaker, which uses ultrasound, has been proposed to emit sound only to a specific area by exploiting the non-linear distortion of the ultrasound. The sound wave emitted by the parametric loudspeaker has sharp directivity. However, the sound wave still propagates to undesired areas by reflection; suppression of the undesired reflection is therefore crucial for emitting sound only to the specific area. The purpose of this study is to suppress the undesired reflection wave by using Active Noise Control (ANC) with the parametric loudspeaker. ANC is a method for suppressing the sound pressure level of the noise by emitting, from a noise-cancellation loudspeaker, a sound wave with the same amplitude but inverse phase. In this study, the noise is defined as the first reflection wave, because many subsequent reflections can be suppressed by suppressing the first one. However, if the noise-cancellation loudspeaker is a normal loudspeaker, the cancelling sound wave is emitted in all directions. Therefore, the cancelling sound wave must also be emitted by a parametric loudspeaker to avoid the diffusion that a normal loudspeaker would cause.
An evaluation experiment was conducted to verify the effectiveness of the proposed method. The direct sound wave was emitted at angles of 15, 30, 45, 60 and 75 degrees. The evaluation index is the noise suppression level of the reflection wave. In this evaluation, the frequency band from 500 Hz to 2000 Hz was used to calculate the noise suppression level, because suppressing high-frequency noise with ANC is difficult and the parametric loudspeaker cannot emit low-frequency sound. The result of the evaluation experiment demonstrates the effectiveness of the proposed method. Furthermore, this effectiveness was independent of the angle.
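For orientation, a standard single-channel filtered-x LMS (FxLMS) update, the textbook ANC algorithm that such systems build on, is sketched below; it says nothing about the parametric loudspeaker itself, and the secondary-path model is assumed to be known and shorter than the control filter.

```python
import numpy as np

def fxlms(reference, disturbance, sec_path, n_taps=128, mu=0.01):
    """Single-channel filtered-x LMS (FxLMS) active noise control sketch.

    reference   : reference-microphone signal
    disturbance : noise at the error microphone (primary-path output)
    sec_path    : assumed (known) secondary-path impulse response,
                  with len(sec_path) <= n_taps
    """
    w = np.zeros(n_taps)
    x_buf = np.zeros(n_taps)            # reference history
    fx_buf = np.zeros(n_taps)           # filtered-reference history
    s_buf = np.zeros(len(sec_path))     # anti-noise history for secondary path
    e = np.zeros(len(reference))
    for n in range(len(reference)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = reference[n]
        y = np.dot(w, x_buf)                            # anti-noise sample
        s_buf = np.roll(s_buf, 1); s_buf[0] = y
        e[n] = disturbance[n] + np.dot(sec_path, s_buf)  # residual at error mic
        # Filter the reference through the secondary-path model before updating.
        fx_buf = np.roll(fx_buf, 1)
        fx_buf[0] = np.dot(sec_path, x_buf[:len(sec_path)])
        w -= mu * e[n] * fx_buf
    return e, w
```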
Hanyang University, Seongdong-Ku, Seoul, Korea
ABSTRACT
In this paper, an adaptive noise cancellation technique is applied for effective active noise control. Noise contained in an acoustic source signal is first removed by the technique, and active noise control is then performed using the noise-removed signal as the reference signal. The control performance obtained using the original acoustic signal is compared with that obtained using the noise-removed signal.
Department of Mechanical Engineering, Center for Noise and Vibration Control (NOVIC), Korea Advanced Institute of Science and Technology, Yuseong-gu, Daejeon, Korea
ABSTRACT
Recently there have been several studies on providing a private listening zone to users, such as personal audio systems and active headrests in aircraft. The objective of these studies is to make the sound louder in the zone containing the listeners (the acoustically bright zone) and to reduce the sound elsewhere (the acoustically dark zone). Acoustic contrast control is a good way to achieve this objective because it maximizes the ratio of acoustic potential energy density between the acoustically bright zone and the dark zone. As an application of this research, we have attempted to generate a private listening zone around a parasol table. More than one user sits around the table, and loudspeakers are arranged on the underside of the parasol: an acoustic parasol. The zone of interest is defined around the table. A circular loudspeaker array is used to create a ring-shaped bright zone around, but excluding, the table; the dark zone is thus defined as the table itself and the region outside the bright zone. To examine the performance of the acoustic parasol for several control variables, computer simulations are carried out, and the results are compared with those of experiments.
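At a single frequency, acoustic contrast control reduces to a generalised eigenvalue problem between the bright-zone and dark-zone spatial correlation matrices; a minimal sketch is shown below, with the transfer matrices and regularisation constant as placeholders rather than the authors' simulation setup.

```python
import numpy as np

def acoustic_contrast_filter(G_bright, G_dark, reg=1e-6):
    """Acoustic contrast control: loudspeaker weights that maximise the ratio
    of mean acoustic potential energy in the bright zone to that in the dark
    zone, at one frequency.

    G_bright : (n_bright_points, n_speakers) transfer matrix to the bright zone
    G_dark   : (n_dark_points, n_speakers) transfer matrix to the dark zone
    Returns the optimal complex source weight vector.
    """
    Rb = G_bright.conj().T @ G_bright / G_bright.shape[0]
    Rd = G_dark.conj().T @ G_dark / G_dark.shape[0]
    # Generalised eigenvalue problem Rb q = lambda (Rd + reg I) q; the
    # eigenvector with the largest eigenvalue maximises the contrast.
    vals, vecs = np.linalg.eig(np.linalg.solve(Rd + reg * np.eye(Rd.shape[0]), Rb))
    return vecs[:, np.argmax(vals.real)]
```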
Institute of New Media and Communications, Department of Electrical Engineering, Seoul National University, Korea
ABSTRACT
Our research aims to develop a signal processing procedure for separating musical audio signals into streams of individual sound sources. In this paper, we present a method that uses NMF and musical cues for musical source separation. Conventional separation methods based on NMF classify the separated note events into each stream manually, so they are difficult to use in real engineering applications; our method, in contrast, performs the classification process automatically. The proposed process consists of a separation step and a reconstruction step. In the separation step, the audio stream is divided into "musical events", groups of notes that have the same frequency structure. The separation method is based on Wang's method, which uses Non-negative Matrix Factorization (NMF). In the reconstruction step, the divided note events are automatically grouped into streams of the individual sound sources using the musical cues. The proposed musical cues consist of timbre features, temporal features, and pitch components. The proposed separation system is evaluated with musical signals that contain multiple musical sources. The evaluation shows that the proposed method can perform the separation automatically using the proposed cues, with the same performance as manual classification.
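The generic NMF factorisation underlying the separation step can be sketched with multiplicative updates as below; this is the standard Euclidean-cost algorithm, not the specific variant of Wang's method or the proposed cue-based reconstruction.

```python
import numpy as np

def nmf(V, n_components, n_iter=200, eps=1e-9):
    """Basic NMF with multiplicative updates (Euclidean cost).

    V ~ W @ H, where V is a non-negative magnitude spectrogram, the columns
    of W are spectral templates of note events and the rows of H are their
    time activations.
    """
    n_freq, n_frames = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n_freq, n_components)) + eps
    H = rng.random((n_components, n_frames)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update spectral templates
    return W, H
```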
Acoustic Lab, Physics Department, South China University of Technology, Guangzhou, P.R.China
ABSTRACT
In real-time rendering of a virtual auditory environment, multiple virtual sound images may be synthesized simultaneously, which costs considerable computational resources. The present work proposes a head-related transfer function (HRTF) model for fast synthesis of multiple virtual sound images. The head-related impulse response (HRIR) of the KEMAR artificial head in the horizontal plane is decomposed using a two-level wavelet packet. To simplify the model, for each wavelet packet tree node (subband), the beginning and ending parts of the coefficients, which are close to zero, are discarded, while the main part of the coefficients, which contribute most to the HRIR energy, is preserved. The results show that, when an appropriate wavelet function is selected, coefficients with only 25 samples are sufficient to reconstruct the original HRIR. The average error across all azimuths caused by the simplification is about 2.5%, with a maximal error below 4%. The present HRTF model is very easy to implement using wavelet filters and sparse filters. Its computational load is M*S+W, where M is the number of sound images and S and W are the computational loads of the sparse filters and wavelet filters, respectively. The coefficients of the sparse filters are upsampled (zero-inserted) versions of the wavelet coefficients, hence the number of nonzero coefficients is much smaller than the length of the original HRTF filter. This means that the present HRTF model can save considerable computational resources when M is large.
TU Dresden, Germany
ABSTRACT
Spatial reproduction in a conventional stereophonic audio system (e.g., stereo or 5.1 surround) works in a small area known as the "sweet spot". If the listener changes position, the phantom source moves in the same direction and finally collapses into the nearer loudspeaker. A playback system that adjusts the loudspeaker signals depending only on the listener's position in real time was evaluated in a previous study. Additionally, the orientation of the head relative to the loudspeaker setup influences phantom source localization. Localization errors that occur when the head is turned are discussed in this article using a binaural localization model. The model shows that the auditory event moves towards the median plane of the listener, and this effect becomes stronger as the original phantom source position deviates further from the median plane. A compensation function is proposed and evaluated. Stable phantom source localization can be achieved using adaptive signal adjustment depending on the listener's position and orientation. A demo version can be downloaded at www.sweetspotter.de.
(1) Graduate School of Information Science, Nagoya University, Japan (2) Graduate School of Engineering, Mie University, Japan
ABSTRACT
A new method of visualizing the characteristics of head-related transfer functions (HRTFs) is proposed. The proposed visualization method can illustrate the HRTFs and extras such as reverberation separately. The HRTF is an acoustic transfer function between a sound source and the ear canal entrance and is defined as a function of time and sound source direction. Since the HRTF depends on the sound source direction and the subject, HRTFs are usually measured with a dummy head or a human. Measured HRTFs are generally visualized by a figure whose axes correspond to the angle of the sound source and the temporal frequency. Such a conventional figure can illustrate the differences in HRTFs among sound source directions, and most previous works employed frequency analysis in the time domain, emphasizing the time variation. In this paper, the measured HRTFs are analyzed with spatio-temporal frequency analysis, and the efficiency of the proposed visualization method is examined. Spatio-temporal frequency analysis visualizes and analyzes the characteristics of HRTFs through the spectrum calculated by a two-dimensional Fourier transform in time and space. In our experiments, the theoretical properties of the spatio-temporal frequency characteristic were investigated and the influence of reverberation in the measurement environment was also examined. Moreover, a dereverberation method is proposed. The results show that the characteristics of HRTFs are mostly concentrated in a specific frequency band and that the proposed visualization method is efficient for illustrating the HRTF and extras such as reverberation and reflection waves. The dereverberation method decreased the average reverberation time from 384 to 282 ms in the best case.
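The spatio-temporal frequency analysis described above amounts to a two-dimensional Fourier transform over direction and time; a minimal sketch, assuming HRIRs measured at equally spaced azimuths, is given below.

```python
import numpy as np

def spatio_temporal_spectrum(hrirs):
    """Spatio-temporal frequency analysis of a set of HRIRs.

    hrirs : (n_directions, n_samples) impulse responses measured at equally
            spaced azimuths (e.g. around a horizontal-plane circle)
    Returns the magnitude of the 2-D Fourier transform, whose axes are
    spatial frequency (over direction) and temporal frequency.
    """
    spectrum = np.fft.fft2(hrirs)                       # FFT over space and time
    return np.abs(np.fft.fftshift(spectrum, axes=0))    # centre spatial frequency
```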
Ritsumeikan University, Kusatsu, Kyoto, Japan
ABSTRACT
Environmental noise has recently become one of the major problems in urban areas. Several methods for reducing environmental noise, such as that from room air conditioners, have been proposed to maintain a comfortable environment. PNC (Passive Noise Control) and ANC (Active Noise Control) are used to overcome this problem. PNC reduces noise using sound-insulating walls and is used, for example, in concert halls. Although PNC is useful for reducing the sound pressure level of high-frequency noise, reducing low-frequency noise is difficult because of the sound transmission loss characteristics of the wall. ANC is used to reduce the sound pressure level of low-frequency noise by using two microphones (a reference microphone and an error microphone) and a noise-cancellation loudspeaker, which emits a sound wave with the same amplitude as the noise but with inverse phase. A filter must be designed to reduce the sound pressure level of the noise efficiently. The multi-channel ANC system was proposed to enhance the noise cancellation performance. As the multi-channel ANC system uses a number of microphones and loudspeakers, a number of filters are used to cancel the noise. The suitable length of each filter is determined by the transfer function from each reference microphone to the noise-cancellation loudspeaker. Although the suitable filter lengths in multi-channel ANC differ, the same length is generally used for all filters.
In this paper, we try to determine the suitable lengths of all filters used for multi-channel ANC. The suitable filter length is determined by iterative processing. First, the proposed method initializes the filter length to a default value. The filter length is then evaluated against a threshold, and it is extended if the evaluation indicates that it is not yet suitable. The suitable filter length is determined by iterating these two steps.
The objective experiment in a real environment was conducted to demonstrate the proposed method. As a result, we confirmed that the proposed method can determine the suitable lengths of all filters.
Ritsumeikan University, Kusatsu, Kyoto, Japan
ABSTRACT
Loudspeakers have been used to emit acoustic sounds such as speech and music. Emitting the sound to a particular area is difficult because the acoustic sound is emitted in all directions. At a museum, on the other hand, emission to a particular area is required so that the sound reaches only the person appreciating an art object and the sounds reproduced by several loudspeakers do not interfere with one another. The parametric loudspeaker, with its sharper radiation, is useful for emitting acoustic sound to a particular area. The sound reproduced by the parametric loudspeaker has high directivity, like a "spotlight" of light, and the emitted area is called an "audio spot". Furthermore, an acoustic sound image can be perceived on a wall area by reflecting the sound off it. We have tried to transmit the acoustic sound to a person by using such reflected sound. However, emission to a particular area is difficult if there is an unwanted object on the transmission path of the sound. In this paper, we propose two reflective objects to control the transmission path of the acoustic sound flexibly: one reflects the sound, and the other diffuses the sound over a wide range of directions. The reflective object can transmit the acoustic sound via other transmission paths even if there is an unwanted object on the original path. Since the diffusive object diffuses the sound reproduced by the parametric loudspeaker much as a normal loudspeaker would, the listener can perceive the acoustic sound image at the position of the diffusive object. Subjective experiments were conducted to demonstrate the effectiveness of the proposed objects; the subjects were asked whether they could perceive the acoustic sound image localization created by the proposed objects. As a result of these experiments, we confirmed that the proposed objects can control the transmission path and that the listener can perceive the acoustic sound image at the desired position.
Digital Signal Processing Laboratory, Nanyang Technological University, Singapore
ABSTRACT
Acoustic detection and localization of embedded sources (or targets) in a medium finds a wide range of applications in medical imaging, nondestructive defect testing, and underwater object detection and classification. Different techniques exist to detect and localize unknown sources (or targets) in the acoustic field. In this paper, we conduct comprehensive reviews and intensive investigations of two commonly used and well-known methods in acoustic detection and localization.
The first method is the MUltiple-SIgnal-Classification (MUSIC) algorithm, which is a passive source detection method. This method can be used to detect and localize incoming sources in the medium. In particular, the MUSIC algorithm utilizes the eigenvectors associated with the small eigenvalues of the array covariance matrix to localize the direction of the source. MUSIC can achieve super-resolution results, but it requires a large number of snapshots to estimate the array covariance matrix; it therefore requires a high computational load and a large memory size in data measurement. The MUSIC algorithm also has limitations in resolving highly correlated and closely spaced sources (i.e. coherent sources) in the medium.
The second method uses the time-reversal technique with the multiple signal classification (TR-MUSIC) algorithm, which is an active detection and localization method. Specifically, this method can detect the number of targets and localize their positions in the medium. The TR-MUSIC algorithm employs the multistatic data matrix (MDM), which requires only a single snapshot in the measurement of reflected data. Thus, the TR-MUSIC algorithm can greatly reduce the computational complexity of data measurement compared with the MUSIC algorithm. Moreover, the TR-MUSIC algorithm can be applied in the multiple scattering case, so it can achieve a larger effective aperture than MUSIC, which is limited by the physical aperture size.
Hence, this paper has three main objectives. Firstly, the MUSIC and TR-MUSIC algorithms are compared in terms of data measurement and computational complexity. Secondly, the robustness of the TR-MUSIC algorithm is investigated in different acoustic detection and localization scenarios. Finally, the TR-MUSIC algorithm is developed and studied in both single and multiple scattering cases.
(1) Czech Technical University in Prague, Czech Republic (2) National Taipei University of Technology, Taiwan (3) National Chiao Tung University, Taiwan
ABSTRACT
Several types of techniques exist for perceptual audio coding. Each has its strengths and weaknesses, and they often show their best performance only on audio signals with specific properties. For example, the Modified Discrete Cosine Transform (MDCT) adopted by most state-of-the-art perceptual coders works very well for coding harmonic signals, but its performance degrades significantly on transient signals. This paper proposes an algorithm that decomposes the input audio signal into a harmonic part, a transient part, and a noise-like part. Each part of the signal is then coded separately using the most appropriate technique to increase the overall coding efficiency. The key components in our proposed scheme are the Short-Time Fourier Transform and the Wavelet Packet decomposition.
(1) Research Institute of Electrical Communication, Tohoku University, Japan (2) Graduate School of Engineering, Tohoku University, Japan (3) Graduate School of Information Sciences, Tohoku University, Japan
ABSTRACT
Previous studies have often treated sound sources as ideal point sources that radiate sound waves equally in all directions (an omni-directional characteristic), contrary to the circumstances in an actual environment. In fact, sound sources have directivity in their radiation. Therefore, when synthesizing a high-definition three-dimensional sound field, it is important to consider the directivity of a sound source with respect to the listening point. Nevertheless, few reports have described the estimation of such all-around directivity. For that reason, we propose a simple but novel method to estimate sound-source directivity as an impulse response in a reverberant environment. In the proposed method, the sound source position and the original source signal are first estimated using a surrounding microphone array. To this end, we apply a Single-Input Multiple-Output (SIMO) model that takes the directivity of the sound source into account. This model estimates the source signal from the received signals using a dereverberation technique; each "bare" impulse response, free from reverberation, between the source position and each receiving point is then estimated by deconvolving each received signal with the estimated original source signal. We assume that, when the length of the directivity response is shorter than the time at which the other reflected waves arrive, the early part of the estimated impulse response indicates the directivity independently. In the surrounding microphone array used here, because the microphones are installed at some distance from the wall, the shortest reflected path corresponds to normal incidence rather than oblique incidence. Therefore, the directivity response can be extracted as the early response, from the first arrival up to the minimum time interval between the direct sound and the reflected sound. The amplitude of each directivity response must be corrected for the difference in distance to obtain the final result. To evaluate the proposed method's effectiveness, a computer simulation was performed using measured impulse responses. The original directivity of the loudspeaker was measured in an anechoic room, and the actual impulse responses were measured in a small room using the surrounding microphone array. Comparison among the original directivity response, the response including the reflected sound, and the estimated response shows that the estimated response resembles the original more closely than the response including the reflected sound does, especially in the high-frequency region.
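The deconvolution step can be illustrated with the following minimal Python/NumPy sketch. The frequency-domain division with Tikhonov-style regularisation shown here is a common choice assumed for illustration, not necessarily the authors' exact dereverberation procedure; all names and the regularisation constant are hypothetical.

    import numpy as np

    def deconvolve_ir(received, source, reg=1e-3):
        """Estimate a 'bare' impulse response by deconvolving a received signal with
        the estimated source signal in the frequency domain, with regularisation to
        avoid division by near-zero spectral bins."""
        n = len(received) + len(source) - 1
        R = np.fft.rfft(received, n)
        S = np.fft.rfft(source, n)
        H = R * np.conj(S) / (np.abs(S) ** 2 + reg * np.max(np.abs(S) ** 2))
        ir = np.fft.irfft(H, n)
        return ir   # the early part (before the first reflection) carries the directivity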
Dept. of Biophysical and Electronic Engineering (DIBE) - University of Genoa - Italy
ABSTRACT
Critical issues in the development of high-resolution 3-D sonar systems are the hardware cost associated with the large number of sensors composing the planar array and the computational burden of processing the gathered signals in real time. In this paper, these problems are overcome by the optimized synthesis of an aperiodic sparse array and by efficient processing of the acquired signals, carried out in the frequency domain and based on Chirp Zeta Transform (CZT) beamforming. On the one hand, the synthesized sparse array enables the device to operate at different frequencies, yielding an acceptable side-lobe level and a good trade-off between the sector of view and the resolution. On the other hand, the CZT beamforming, specifically devised to cope with the requirements of volumetric sonar imaging, allows the processing of wideband signals collected by a planar array and generated by a scene encompassing both near-field and far-field regions. The combination of a very limited number of sensors with the CZT beamforming yields a computational load that is two orders of magnitude lower than that of delay-and-sum beamforming and one order of magnitude lower than that of the traditional frequency-domain implementation. The reduction in the number of sensors and in the computational load produces, in turn, a noticeable reduction of the hardware cost.
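The CZT component on its own can be illustrated as follows. This minimal Python sketch assumes that SciPy (version 1.8 or later) provides scipy.signal.czt, and it only shows the band-limited spectral evaluation that CZT-based processing exploits, not the full volumetric beamformer described in the paper; the band edges and bin count are illustrative.

    import numpy as np
    from scipy.signal import czt   # assumed available in SciPy >= 1.8

    def zoom_spectrum(x, fs, f1, f2, m=256):
        """Evaluate the spectrum of x only on the band [f1, f2) with m bins, the kind
        of band-limited evaluation that CZT-based frequency-domain processing uses."""
        a = np.exp(2j * np.pi * f1 / fs)                 # starting point on the unit circle
        w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))   # ratio between successive bins
        X = czt(x, m=m, w=w, a=a)
        freqs = f1 + np.arange(m) * (f2 - f1) / m
        return freqs, X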
Chungju National University, Chungju, Korea
ABSTRACT
Conventional audio amplifiers amplify signals from compact discs, digital audio tapes and digital audio signal processors after they have been converted to analog form. Because of their large volume and high power consumption, such amplifiers are not well suited to mobile IT devices such as PMPs, MP3 players and notebook PCs. Nowadays, portable electronics, home audio/video devices and, increasingly, car audio systems are moving to class-D audio amplifiers, which retain the advantages of digital circuit design: a theoretical power efficiency of 100%, an excellent frequency response and a very low distortion factor. Above all, a class-D amplifier can be made compact enough for recent portable IT devices, and combining such a compact design with a Li-polymer secondary battery greatly increases mobility. The design becomes even more valuable and useful when a Bluetooth chip adds a wireless communication function. In this paper, a design and implementation method for a wireless class-D digital stereo amplifier system is presented.
(1) Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing, P.R.China (2) College of Information and Electrical Engineering, Shandong University of Science and Technology, Qingdao, P.R.China
ABSTRACT
A wide-spread sensor network array consisting of broadband infrasound microphones (0.001 Hz to 20 Hz), used to observe and study low-frequency atmospheric infrasonic waves, is presented. Each unit of the network is distributed over a large area and can monitor local infrasonic waves effectively. Continuous observations and statistical analyses show that it is hard to extract exact information about an infrasonic source from the random array signals observed, because of disturbances caused by factors such as wind, temperature and turbulence, which exist both in the atmosphere and along the long propagation paths of infrasonic waves arriving from sources hundreds of kilometres away. Therefore, a Rainbow-graph method based on acoustic imaging analysis of the signals observed by the wide-spread sensor network array is proposed; it indicates, through changes of colour and luminance in the Rainbow graph, how the amplitude and frequency of the infrasonic waves vary along their propagation paths. Propagation laws of infrasonic waves and information about their sources can be studied with this method. The wide-spread sensor network array and the Rainbow-graph method presented in this paper may advance the study of atmospheric infrasonic waves and their propagation laws.
Pacific National University, Russia
ABSTRACT
Variations in the acoustic parameters, the propagation velocity C and the attenuation coefficient α, are investigated as a function of cure time during polymerization for samples based on ED-20 resin with different contents of polyethylene polyamine as a hardener, at constant temperature. Temperature profiles of the acoustic characteristics are studied for the pure epoxy oligomer ED-20, which undergoes a liquid-to-solid transition on cooling (vitrification temperature). A comparison is made with the dynamics of the electric and mechanical properties. A procedure is proposed for determining the dispersion characteristics of the velocity C and the attenuation coefficient α of ultrasound in the presence of considerable signal distortion. The frequency dependences of C and α were determined from digital oscillograms obtained in pulsed mode. Prominent on the oscillograms were the first incoming ultrasonic signal U1 and the second, re-reflected signal U2; re-reflected components were superimposed on U2. To reduce their influence when determining C and α, the following procedure was used: 1. The time domains of the pulses U1 and U2 under investigation were determined; 2. An extra signal was introduced to compensate for the superposition on U2; 3. The pulses U1 and U2 were "cut out" with a function F(t) based on the Heaviside function, which extracts the range under consideration without dramatic distortion of the phase-frequency and amplitude-frequency characteristics of the signal; 4. The attenuation coefficient α and the velocity C were then calculated as functions of frequency.
Our technique was tested on steel 20 and on 5W-30 motor oil. For the temperature range from -20 to +80, close agreement was obtained with known dispersion characteristics for metals and liquids. The frequency dependences C(ω) and α(ω) obtained with our technique were compared with those obtained from the arrival time and amplitude of the ultrasonic pulses and with the direct Fourier transform of the signal. The dynamics of polymerization was also studied with our technique. The proposed technique provides C(ω) and α(ω) as functions of temperature and cure time at transitions from a high-viscosity liquid to a solid state during polymerization and vitrification for various samples. A number of features of the liquid-to-solid transitions have been highlighted that were not established when the same data were treated with the traditional technique.
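As an illustration of step 4 above, the following Python/NumPy sketch estimates α(ω) and C(ω) from two already windowed pulses separated by a known extra propagation distance. The function names, the absence of the compensation signal from step 2, and the simple spectral-ratio formulation are assumptions made for illustration, not the authors' exact procedure.

    import numpy as np

    def dispersion_from_pulses(u1, u2, fs, path_length):
        """Frequency-dependent attenuation alpha(w) and phase velocity C(w) from two
        windowed ultrasonic pulses u1 (first arrival) and u2 (re-reflected arrival)
        separated by an extra propagation distance path_length."""
        n = max(len(u1), len(u2))
        U1 = np.fft.rfft(u1, n)
        U2 = np.fft.rfft(u2, n)
        f = np.fft.rfftfreq(n, 1.0 / fs)
        omega = 2 * np.pi * f
        alpha = -np.log(np.abs(U2) / np.abs(U1)) / path_length    # attenuation, Np per unit length
        dphi = np.unwrap(np.angle(U2) - np.angle(U1))             # extra phase accumulated by U2
        with np.errstate(divide="ignore", invalid="ignore"):
            c = -omega * path_length / dphi                       # phase velocity vs frequency
        return f, alpha, c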
(1) Research Institute of Electrical Communication and Graduate School of Information Sciences, Tohoku University, Japan (2) Department of Design and Computer Applications, Sendai National College of Technology, Japan (3) Graduate School of Engineering, Tohoku University, Japan
ABSTRACT
Sensing, transmission, and reproduction of precise 3D sound-space information are important for realizing communications of high-definition quality. Although reports describing reproduction technologies are numerous, those related to sensing technologies are far fewer. Our proposed system acquires 3D sound-space information and transmits it accurately to a distant place using a microphone array on a human-head-sized solid sphere with numerous microphones on its surface. The object and its microphones are called a Symmetrical object with ENchased ZIllion microphones (SENZI). Each recorded microphone signal is simply weighted, in the frequency domain, by a coefficient optimized for each listener, and the weighted signals are summed to synthesize the set of the listener's HRTFs from the spatially distributed microphone inputs. The synthesized signals are presented binaurally via headphones. Moreover, the set of weighting coefficients is changed in real time according to the listener's 3D head movement (yaw, pitch, and roll). Therefore, the 3D sound-space information is acquired accurately, irrespective of head movement. Results of a computer simulation in a previous study indicate that 1) the microphones should be arranged densely to avoid spatial-aliasing effects, 2) the controlled directions should be set densely, at intervals of less than 5 degrees, to synthesize sound sources from all directions, and 3) the radius of the microphone array should be the same as the size of a listener's head to express the details of the listener's frequency and phase characteristics. Based on these examinations, we developed a system with 252 microphone channels and investigated the accuracy of the synthesized 3D sound-space information. The results demonstrate that, from the sensed signals, the system can synthesize the 3D sound space for a specific listener with high precision.
Institute of Technical Acoustics, RWTH Aachen University, Germany
ABSTRACT
This paper presents a method that can blindly estimate the reverberation time of an enclosed space using only the signals from two or more spatially distributed receivers, for example a binaural signal. There is no need for a controlled or known excitation signal, and there are no special requirements on the excitation: the method works with any kind of acoustic source, such as a talker, a musical instrument or a noise source. The indicator used for the reverberation-time estimation is the spatial coherence and its dependency on the block size used for the coherence calculation. Using a neural network as the estimator, a unique relationship between the block-size-dependent spatial coherence and the reverberation time could be verified and used for reverberation-time estimation.
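The coherence indicator itself (not the neural-network estimator) can be sketched as follows in Python, assuming SciPy's scipy.signal.coherence and an illustrative set of block sizes; in the proposed method a trained network would then map this block-size dependence to a reverberation-time estimate.

    import numpy as np
    from scipy.signal import coherence

    def coherence_vs_blocksize(x_left, x_right, fs, block_sizes=(256, 512, 1024, 2048, 4096)):
        """Magnitude-squared coherence between two receiver signals for several block
        sizes; the vector of broadband averages is the feature used for estimation."""
        features = []
        for nper in block_sizes:
            f, cxy = coherence(x_left, x_right, fs=fs, nperseg=nper)
            features.append(cxy.mean())    # broadband average coherence for this block size
        return np.array(features)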
Radio Application Division, NEC Corporation, Tokyo, Japan
ABSTRACT
In multipath environments such as shallow-water sonar operation, time-reversal methods have proven effective for target detection in back-scattering configurations. Time-reversal methods have the advantage that they do not require knowledge of the propagation-path structure in advance. However, in highly noisy or highly reverberant conditions they struggle to indicate the target range, because signal peaks are difficult to distinguish from noise. If the methods instead estimate the target direction, the target position can be obtained from multiple sonars even in such conditions. The accuracy of the estimated direction is enhanced by integrating the signals along each direction. Target positions are estimated by detecting the crossing point of the direction lines obtained from the time-reversal processing of each sonar. More than three sonars are required, and they should not be arranged on a single line, since a target lying on the same line as the sonars cannot be localized. The method described above implies that each sonar receives only the signals it transmitted itself. As a next step, the method can be extended so that each sonar also receives signals from the other sonars for signal enhancement. Because this procedure resembles a multistatic sonar system, the extension is called 'multistatic time reversal'. Multistatic time-reversal sonars are more robust than traditional time reversal in low signal-to-noise or low signal-to-reverberation conditions because they integrate the signals of the other sonars. In the new method, transmission timings and waveforms must be considered carefully when planning the detection coverage area and suppressing interference among the sonars. Time-reversal methods aggregate signals scattered in time; the multistatic time-reversal method aggregates signals scattered not only in time but also in space. It promises to be an influential tool for multipath investigations in a variety of fields.
Tomsk Polytechnic University, Tomsk, Russia
ABSTRACT
Ultrasound propagation in bounded media changes the echo-pulse shape, so the traditional method of determining the propagation time of an ultrasound pulse in the tested medium with a threshold device (comparator) produces a larger error. To reduce this error, two new methods of determining the echo-pulse arrival time are proposed in this paper. The first is based on the use of two comparators; the second is based on a second-order polynomial approximation of the echo-pulse envelope. Analysis showed that the steeper the rise of the echo-pulse envelope, the smaller the error in determining the pulse onset, and that the larger the difference between the comparator thresholds, the smaller the onset error. The two-comparator method reduces the systematic error by a factor of two. The minimum sampling rate of the input signal is determined for the envelope-based method, and the dependence of the onset error on the amplitude ratio of adjacent samples is obtained. The envelope-based method reduces the systematic error by a factor of three.
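The envelope-based idea can be sketched as follows in Python/NumPy: a second-order polynomial is fitted to three envelope samples around a threshold crossing and solved for a sub-sample crossing time. The threshold ratio and the three-sample window are illustrative assumptions, not the paper's tuned parameters.

    import numpy as np

    def onset_from_envelope(envelope, fs, threshold_ratio=0.5):
        """Sub-sample estimate of the echo-pulse onset from its sampled envelope."""
        thr = threshold_ratio * envelope.max()
        i = max(int(np.argmax(envelope >= thr)), 1)       # first sample at or above threshold
        t = np.arange(i - 1, i + 2) / fs                  # three samples bracketing the crossing
        a, b, c = np.polyfit(t, envelope[i - 1:i + 2], 2) # parabola through the three samples
        roots = np.roots([a, b, c - thr])                 # where the parabola crosses the threshold
        roots = roots[np.isreal(roots)].real
        return roots[np.argmin(np.abs(roots - i / fs))]   # crossing time in seconds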
(1) Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, Science Town Daejeon, Korea (2) Department of Mechanical Engineering, Center for Noise and Vibration Control, Korea Advanced Institute of Science and Technology, Science Town Daejeon, Korea
ABSTRACT
It is well known that the problem of generating sound in a region of interest by using a finite number of loudspeakers is mathematically ill-posed. With additional constraints and a suitable objective function, the problem can be regarded as well-posed; in other words, the way to drive the loudspeakers to produce a desired sound field in a prescribed zone can be determined directly. This is called the sound manipulation problem. It is noteworthy, however, that the arrangement and radiation characteristics of the loudspeakers have to be assumed known or predetermined. Under this assumption, we can manipulate a desired sound field in a selected zone. Mathematically, this is a function reconstruction problem in a finite-dimensional function space: the arrangement and radiation characteristics of the loudspeakers (i.e. the transfer functions) form a basis set of the function space. In this paper, the relation between the transfer-function set and the sound manipulation performance is examined for two cases, pressure-field reconstruction and energy-density focusing. The mathematical derivation shows that, to generate the desired field, the required transfer-function characteristics differ between these two cases. This result is interpreted physically, and a novel way to generate a desired sound field in space is introduced.
Ritsumeikan University, Kusatsu, Kyoto, Japan
ABSTRACT
Sound emitted by a cone-type loudspeaker spreads widely so that it surely reaches the listeners. It may therefore also reach areas where it is not wanted, where non-listeners perceive it as noise. Likewise, the simultaneous emission of different sounds from multiple loudspeakers may be perceived as noise even by the intended listeners. To overcome these problems, the parametric loudspeaker, which has a much sharper radiation characteristic, has been proposed. The parametric loudspeaker, which exploits ultrasound, can form an "audio spot" and emit sound only to a particular area. Recently, attention has focused on the "reflective audio spot", which is formed from the signal reflected when a parametric loudspeaker insonifies a surface; the listeners then localize the sound image on the reflector. To control the sound image freely, the reflective audio spot would in general have to be diffused. To address this, we propose to steer the perceived distance by controlling the sound image of the reflective audio spot with a parametric loudspeaker array. In particular, we try to steer the perceived distance between the reflector and the listener by designing focal and null points with an adaptive digital filter. MINT (Multiple-input/output INverse Theorem) is used to design the focal and null points for the parametric loudspeaker array.
In this paper, we first try to steer the perceived distance with three focal and null points located between the reflector and the listener, designing MINT-based adaptive filters for each parametric loudspeaker. The proposed system adaptively selects the optimum filters according to the desired distance perception. Objective and subjective evaluation experiments were carried out in a soundproof room. The objective evaluation confirmed that the three focal and null points are accurately formed by the proposed system. The subjective evaluation further confirmed that subjects at a distance of 150 cm from the reflector could perceive the difference in distance between the designed sound images; the proposed system can therefore steer the distance perception of the reflective audio spot. However, none of the subjects at a distance of 225 cm from the reflector could perceive this difference. In future work, we will therefore try to extend the range of the acoustic images for distance-perception steering by using more parametric loudspeakers.
Faculty of Systems Science and Technology, Akita Prefectural University, Japan
ABSTRACT
One purpose of sound emission in public spaces is to transfer the information it carries. Since audible sound has wavelengths comparable to the objects around us, diffraction and reflection make it difficult to prevent its propagation into regions where it is not required. If the information in sound could be conveyed only at a desired local spot in the sound field, communication with sound would gain a new property beyond this physical limitation. Although the parametric loudspeaker based on ultrasound is useful for such needs, it can limit only the "direction" of sound propagation, not the local "spot". In this paper, another approach to reproducing a speech signal at a local spot is introduced. It is based on decomposing the signal onto orthogonal basis functions made from random vectors, an approach applied to the transaural system by Negi et al. It has some difficulties, however, in reproducing speech at a local spot. One of them is that the content of the speech can still be understood from the synthesized signal at points other than the desired spot, although its quality is degraded by the decomposition into random signals. Since the target of our study is the reproduction of speech signals, the locations of the sound sources that emit the decomposed random signals affect how intelligible the speech remains away from the spot. The performance is poor when the sound sources are all located at the same distance from the desired spot: in that case the content of the synthesized speech can be understood at points around the spot. Distributing the source distances from the spot has the potential to improve the performance. In this paper, the relation between several sound-source arrangements and the signals synthesized from the random-signal decomposition is discussed via computer simulation, and the synthesized speech signals are demonstrated and evaluated with a few measures.
User Interface Laboratory, KDDI R&D Laboratories Inc., Japan
ABSTRACT
This paper presents a novel pointing system for large displays based on three-dimensional ultrasonic positioning technology. The system consists of a display or projector screen, a pointing device with two microphones mounted on the axis of the pointing direction, and three loudspeakers set around the display. The three-dimensional position of each microphone is estimated by trilateration from its three distances to the loudspeakers, and the pointer is displayed at the intersection of the display plane with the straight line connecting the two microphone positions. The system is targeted at interaction with large displays such as digital signage. In experiments with a prototype, we use band-limited Gaussian noise from 18 to 24 kHz as the source signals, which can be reproduced by normal audio-visual equipment. The estimation error of the microphone position has a standard deviation of 17 mm, which is comparable to the error of common positioning systems. The pointer accuracy was measured as an angular error below 4 degrees, comparable to hand jitter, for 95% of frames when the microphones are mounted on the pointing device 0.15 m apart. The pointer is displayed at a 15 Hz frame rate with a latency of 31 ms.
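The two geometric steps, trilateration from three distances and intersection of the pointing line with the display plane, can be sketched as follows in Python/NumPy. The classic three-sphere trilateration below returns two candidate positions; resolving the ambiguity by keeping the solution in front of the display is an assumption of this sketch, as are all names and parameters.

    import numpy as np

    def trilaterate(p1, p2, p3, r1, r2, r3):
        """Two candidate positions of a microphone given three loudspeaker positions
        p1..p3 and the measured distances r1..r3 (closed-form three-sphere solution)."""
        p1, p2, p3 = map(np.asarray, (p1, p2, p3))
        ex = (p2 - p1) / np.linalg.norm(p2 - p1)
        i = np.dot(ex, p3 - p1)
        ey = p3 - p1 - i * ex
        ey = ey / np.linalg.norm(ey)
        ez = np.cross(ex, ey)
        d = np.linalg.norm(p2 - p1)
        j = np.dot(ey, p3 - p1)
        x = (r1**2 - r2**2 + d**2) / (2 * d)
        y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
        z = np.sqrt(max(r1**2 - x**2 - y**2, 0.0))
        base = p1 + x * ex + y * ey
        return base + z * ez, base - z * ez

    def pointer_on_display(mic_front, mic_back, plane_point, plane_normal):
        """Intersect the line through the two microphone positions with the display plane."""
        direction = mic_front - mic_back
        t = np.dot(plane_point - mic_back, plane_normal) / np.dot(direction, plane_normal)
        return mic_back + t * direction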
Department of Building Services Engineering, The Hong Kong Polytechnic University, P.R.China
ABSTRACT
A new parameter estimation method for a point harmonic source moving with unknown velocity and frequency is presented in this paper. A time-frequency representation of the source signal takes the place of traditional time-correlation estimation methods. For a harmonic source moving at constant velocity, the received signal, which is amplitude- and frequency-modulated, has no spatial correlation between microphone pairs in the time or frequency domain, which complicates the estimation problem. However, it is observed that the Doppler-shifted frequency of the signal correlates well between spatially distributed microphone signals. Moreover, the second time derivative of the Doppler-shifted frequency gives the reference time for estimating the source parameters in the time domain. In this paper, the estimation algorithm based on the time-frequency transformation is presented, and the suitability of the short-time Fourier transform (STFT), the filtered short-time Fourier transform (FSTFT) and the polynomial time-series (PTS) method is illustrated. In addition, the performance of these analyzers for different source velocities, frequencies and signal-to-noise ratios is examined in computer simulations. The results demonstrate the validity of the proposed method in providing a reliable estimate for a source moving at constant speed. The paper concludes with further investigation and discussion of the proposed method.
Faculty of Engineering, Shizuoka University, Japan
ABSTRACT
This paper proposes a reference-signal extraction method for a semi-adaptive sound reproduction system operating in a noisy environment. To realize a sound reproduction system with several loudspeakers, inverse filters are designed and used to cancel the effect of the room transfer functions (RTFs). However, RTFs are not invariant; they vary with environmental conditions such as temperature. Therefore, in sound reproduction systems that use fixed inverse filters, reproduction accuracy is degraded by environmental fluctuations. We previously proposed a semi-adaptive sound reproduction system that compensates for such fluctuations, for example temperature changes, together with an inverse-filter relaxation algorithm, in order to maintain the quality of the reproduced sound. In this system, one monitoring microphone can be placed at a location that does not restrict the listener, and the inverse filters can be updated using the signal observed at the monitoring microphone as a reference signal.
However, since the environment normally contains several noise sources, it is difficult for the conventional system to observe only the reproduced signal. To resolve this problem, we propose a method for observing only the reproduced signal in a noisy environment using the semi-adaptive sound reproduction system. We introduce semi-blind source separation (semi-BSS) based on frequency-domain independent component analysis (FDICA) into the semi-adaptive system to extract the reproduced signal from the noisy observation. First, the noise signal is obtained from the noisy observation by semi-BSS; the estimated reference signal is then obtained by subtracting the noise signal from the observation. In a simulation using real environmental data, the proposed method extracted the reference signal with a high signal-to-deviation ratio (SDR) when the RTFs were changed by temperature fluctuations.
University of Applied Sciences, Hamburg, Germany
ABSTRACT
In this paper, a semi-virtual violin is presented which has been developed in the context of a research project on desirable violin sound properties. Since the timbre and the reverberation characteristics of a violin are primarily determined by the nature of the resonance body, the main component of the platform is a modifiable virtual body. The method used here focuses on the musicians' perception of spectral components rather than on physical-modeling properties. A silent violin, designed with particular emphasis on authentic haptic and virtual properties, is used as the interface between musician and virtual body. Binaural transfer functions of real violins measured at the violinist's hearing position serve as initial sound references from which further spectral modifications start. A specific filtering technique enables highly detailed modifications in the frequency domain, changing individual resonances or resonance areas while leaving other resonances unaffected. Implementation on an external signal processor provides real-time sound processing. Owing to an overall system latency of less than 5 ms, the platform allows experiments on perceived sound properties and human-instrument interaction together with musicians. An example of application is given: the presented tool is used, inter alia, to manipulate the vowel quality of violin tones by specifically changing formant properties, since, in concurrent research work, the authors seek a relationship between perceptible vowel properties in violin tones and the quality of instruments.
(1) George Mason Univ., Fairfax, VA, USA (2) Univ. of Massachusetts Dartmouth, N. Dartmouth, MA, USA
ABSTRACT
Dominant Mode Rejection (DMR) adaptive beamforming replaces the covariance matrix for the Minimum Variance Distortionless Response (MVDR) beamformer with a modified sample covariance matrix (SCM). DMR modifies the SCM by first segmenting the eigenvalues into the signal (large eigenvalues) and noise (small eigenvalues) subspaces. The modified SCM uses the large signal eigenvalues but replaces the small noise eigenvalues with the average of these noise eigenvalues. The performance of the DMR beamformer in practical scenarios depends on the quality of the estimates of the rank of the signal subspace, as well as the quality of the estimated signal eigenvalues and associated eigenvectors. Therefore, an important challenge in practical applications of DMR is correctly estimating the rank of the signal subspace. Nadakuditi and Edelman recently developed an extension of the Akaike Information Criteria (AIC) for estimating the number of high dimensional signals from a relatively small number of observations exploiting results from infinite random matrix theory. The accuracy of the new Nadakuditi & Edelman AIC (N/E AIC) in estimating the dominant subspace rank was compared with the traditional AIC and Minimum Description Length (MDL) techniques. These simulations examined uniform linear arrays with one signal and varying numbers of array elements, snapshots and signal-to-noise ratios (SNRs). The N/E AIC performed better than the traditional AIC and MDL approaches in achieving a higher probability of correct rank estimation at a lower SNR in each case evaluated. Additionally, the N/E AIC performs well even in snapshot deficient cases where there are fewer snapshots than sensors. Both the standard AIC and MDL fail in snapshot deficient cases. The N/E AIC performance was also evaluated in simulations including a loud interfering source (+40 dB) and a relatively quiet source (-10 dB below the noise floor) observed by a uniform linear array with half-wavelength sampling over a range of array apertures and numbers of snapshots. The observed Signal to Interferer and Noise Ratio (SINR) for the standard DMR with N/E AIC suffered from a substantial degradation due to mismatch as the number of array elements grew. When the DMR algorithm was modified to incorporate the Cox/Pitre robust DMR method as well as the N/E AIC, the SINR closely tracked the performance of the omniscient beamformer with prior knowledge of the signal subspace rank.
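The DMR modification of the sample covariance matrix described above can be sketched as follows in Python/NumPy; the signal-subspace rank is taken as a given input (in practice it would come from an estimator such as the N/E AIC), and the MVDR-style weight computation and variable names are illustrative.

    import numpy as np

    def dmr_covariance(scm, signal_rank):
        """Keep the dominant (signal) eigenpairs of the sample covariance matrix and
        replace the remaining noise eigenvalues by their average."""
        eigval, eigvec = np.linalg.eigh(scm)              # ascending order
        eigval, eigvec = eigval[::-1], eigvec[:, ::-1]    # re-order to descending
        noise_avg = eigval[signal_rank:].mean()
        modified = np.full_like(eigval, noise_avg)
        modified[:signal_rank] = eigval[:signal_rank]
        return (eigvec * modified) @ eigvec.conj().T

    def dmr_weights(scm, steering, signal_rank):
        """MVDR-style adaptive weights computed from the DMR-modified covariance."""
        R = dmr_covariance(scm, signal_rank)
        w = np.linalg.solve(R, steering)
        return w / (steering.conj() @ w)                  # distortionless in the look direction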
(1) Research Institute of Electrical Communication, Sendai, Miyagi, Japan (2) Graduate School of Information Sciences, Tohoku University, Japan (3) Graduate School of Engineering, Tohoku University, Sendai, Miyagi, Japan
ABSTRACT
In audio communications over networks with packet losses, such as those relying on UDP and RTP, the quality of the decoded signal may be severely degraded. To cope with this issue, packet loss concealment (PLC) techniques are needed. However, PLC methods are subject to a trade-off between the amount of transmitted information and the quality of concealment. The Multiple Description Coding (MDC) paradigm has recently attracted the attention of researchers. The concept of MDC is as follows: (1) the original media data are divided into multiple subsets, each of which is called a description; (2) each description is sufficient to approximate the original; and (3) if all the descriptions are received, the reconstructed media are perceived to be identical or approximately identical to the original. MDC can flexibly balance the concealment quality and the bitrate without retransmission of lost packets. MDC techniques for wideband audio codecs such as MP3 have not been studied extensively, and in existing techniques the perceptual sound quality attainable even when all descriptions are received has remained insufficient for practical use.
Herein, a new MDC method applicable to the streaming of MPEG-1 audio layer III (MP3) coded sound signals is proposed. The presented MDC method is based on a time-domain sub-sampling process. More specifically, input signals are first split into two streams, one consisting of the odd-numbered samples and the other of the even-numbered ones. Each sub-sampled stream is encoded using an MP3 codec, which enables the reduction of the packet data size while preserving the perceived sound quality for single descriptions. In this basic method, the sound quality deteriorates when both descriptions are used due to the aliasing of the quantization noise introduced by the MP3 codec. This causes spectral distortions at high frequencies. Therefore, to enhance the sound quality of the decoded signal, we introduced a Wiener filter, designed by exploiting the independent nature of high frequency band aliasing. Experiments were conducted to compare the proposed method with several conventional PLC techniques, including the above-mentioned basic MDC. Results confirm that the proposed method outperforms previous ones for a range of bitrates from 128 to 224 kbps when the random packet loss rate is between 5 and 10 percent.
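The time-domain sub-sampling step and a simple concealment for a lost description can be sketched as follows in Python/NumPy. The MP3 encoding of each stream and the Wiener-filter enhancement are not shown; linear interpolation is used here as a simple stand-in for concealment, and the half-sample offset of the odd description is ignored for simplicity, so this is an illustration rather than the paper's full method.

    import numpy as np

    def split_descriptions(signal):
        """Form the two descriptions from the even- and odd-numbered samples."""
        return signal[0::2], signal[1::2]

    def merge_descriptions(even=None, odd=None):
        """Reconstruct the signal; if one description is lost, conceal it by linear
        interpolation of the received one."""
        if even is not None and odd is not None:
            out = np.empty(len(even) + len(odd))
            out[0::2], out[1::2] = even, odd
            return out
        received = np.asarray(even if even is not None else odd, dtype=float)
        out = np.repeat(received, 2)                         # duplicate received samples
        out[1:-1:2] = 0.5 * (received[:-1] + received[1:])   # interpolate the missing ones
        return out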
School of Electrical Engineering and Computer Science, Kyungpook National University, Daegu, Korea
ABSTRACT
Auscultation is the most widely used method for diagnosing heart diseases that result from heart-valve abnormalities. Since auscultation relies on the physician's perception and interpretation of the heart condition, it requires considerable experience and the diagnostic result is highly subjective. It would therefore be useful to develop an automated heart-sound classification system that can provide a more objective diagnosis. Since the heart-sound signal has amplitude and frequency characteristics that vary from one cardiac cycle to the next and across different pathological subjects, in this paper we model the heart sound with a hidden Markov model (HMM). The HMM is a well-known pattern recognition method that models time-varying and non-stationary signals such as speech quite well and is widely used in speech recognition research. Considering that auscultation consists of physicians listening to heart sounds, we choose the mel-frequency cepstral coefficients (MFCCs), which are used extensively in speech recognition, as the feature vector for classifying heart sounds. We use a total of 275 manually segmented heart-sound cycles belonging to 6 heart disease classes: NS (normal heart sound), AS (aortic stenosis), MR (mitral regurgitation), AR (aortic regurgitation), MS (mitral stenosis) and TR (tricuspid regurgitation). The heart-sound signals are taken from clinical training audio CDs for physicians and then resampled to 16 kHz with 16-bit resolution. The 13-dimensional MFCCs are extracted as features for classification. We investigate the influence on classification performance of parameter values such as the analysis frame size and frame rate in MFCC feature extraction, the number of HMM states, and the number of Gaussian mixtures in each state of the HMM. To overcome the data insufficiency problem, we carry out 10-fold cross validation for training and testing. Experimental results are presented in detail together with our discussion.
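A minimal sketch of an MFCC-plus-HMM classifier of this kind is shown below in Python, assuming the third-party librosa and hmmlearn packages are available; the frame size, hop size, number of states and number of mixtures are illustrative values, not the parameter settings studied in the paper.

    import numpy as np
    import librosa                    # assumed available for MFCC extraction
    from hmmlearn import hmm          # assumed available for Gaussian-mixture HMMs

    def extract_mfcc(cycle, sr=16000, frame_len=0.025, hop=0.010, n_mfcc=13):
        """13-dimensional MFCCs from one segmented heart-sound cycle, (frames, 13)."""
        return librosa.feature.mfcc(y=cycle, sr=sr, n_mfcc=n_mfcc,
                                    n_fft=int(frame_len * sr),
                                    hop_length=int(hop * sr)).T

    def train_class_models(cycles_per_class, n_states=4, n_mix=3):
        """Train one HMM per disease class from its training cycles."""
        models = {}
        for label, cycles in cycles_per_class.items():
            feats = [extract_mfcc(c) for c in cycles]
            model = hmm.GMMHMM(n_components=n_states, n_mix=n_mix, covariance_type="diag")
            model.fit(np.vstack(feats), lengths=[len(f) for f in feats])
            models[label] = model
        return models

    def classify(cycle, models):
        """Pick the class whose HMM gives the highest log-likelihood."""
        feats = extract_mfcc(cycle)
        return max(models, key=lambda label: models[label].score(feats))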
The Australian National University, Canberra, Australia
ABSTRACT
Multizone soundfield reproduction, which has various potential applications, has recently drawn attention in acoustic signal processing. In this paper, we seek to recreate two or more distinct 2D soundfields simultaneously at different spatial regions using multiple loudspeaker arrays. The basic ideas from cross-talk cancellation systems were applied to determine the loudspeaker weights by the least-squares method. Simulation results demonstrate favorable performance.
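A minimal sketch of the least-squares weight computation is given below in Python/NumPy, under the assumption of known single-frequency transfer-function matrices from each loudspeaker to control points in a "bright" zone (where a desired field is reproduced) and a "dark" zone (driven towards zero); the matrix names and the regularisation are illustrative, not taken from the paper.

    import numpy as np

    def multizone_weights(G_bright, G_dark, desired_bright, reg=1e-3):
        """Loudspeaker weights q such that G_bright @ q approximates the desired field
        in one zone while G_dark @ q is driven towards zero in the other zone."""
        G = np.vstack([G_bright, G_dark])
        d = np.concatenate([desired_bright, np.zeros(G_dark.shape[0], dtype=complex)])
        # Regularised least squares: (G^H G + reg I) q = G^H d
        A = G.conj().T @ G + reg * np.eye(G.shape[1])
        return np.linalg.solve(A, G.conj().T @ d)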
Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing, P.R.China
ABSTRACT
A broadband active noise equalization algorithm is used to shape the noise spectrum in order to match human preference. The stability condition of the algorithm is studied in this paper. Analysis shows that the phase shift of the shaping filter, whose magnitude response defines the desired noise spectrum, has a significant effect on the system stability. The stable range of the secondary-path modeling phase error is larger than 180 degrees if the phase shift of the shaping filter is between -90 and 90 degrees, and smaller than 180 degrees if it is outside that range. This conclusion suggests that a shaping filter with a phase response between -90 and 90 degrees can achieve the desired noise spectrum with better stability. Simulations are presented to validate the conclusions.
Institute of Acoustics, Chinese Academy of Sciences, Beijing, P.R.China
ABSTRACT
Spherical Harmonics Domain (SHD) beamforming has recently become an important research topic in three-dimensional (3D) sound reception, sound-field analysis for room acoustics, direction-of-arrival (DOA) estimation, and so on. Most existing SHD beamformers are implemented in the frequency domain, where the discrete Fourier transform and complex-valued signal processing are required. The frequency-domain implementation is not suitable for some applications because of its associated time delay. In this paper, an approach to a real-valued time-domain implementation of an SHD beamformer for spherical microphone arrays is proposed. The advantage of the time-domain implementation is that the beamformer can be updated as each new snapshot arrives. Our technique is based on a modified filter-and-sum modal beamforming structure. The time series received at the microphones are converted into SHD data using the spherical Fourier transform. The SHD data are input to the steering unit and then fed to a bank of finite impulse response (FIR) filters, whose outputs are summed to produce the beamformer output time series. The FIR filter tap weights are designed optimally by trading off multiple conflicting array performance measures such as directivity, mainlobe spatial response variation (MSRV), sidelobe level, and robustness. The design problem is formulated as a multiply-constrained problem which is solved using second-order cone programming (SOCP). Results from simulations and experimental data processing show the good performance of the proposed time-domain SHD beamformer design approach.
(1) Doshisha University, Japan (2) Osaka University, Japan
ABSTRACT
ICA is usually presented as a means of blind source separation, i.e. of calculating the separation matrix that maximizes the independence among the separated source components. Mutual independence, however, cannot determine the ordering of the separated sources, nor can it fix the magnitude balance between the mixing matrix and the source signals: the former indeterminacy causes the permutation problem and the latter the scaling problem. The indeterminacy of the scaling factor allows us to introduce an equi-variance assumption on the autocorrelation matrix of the source signals in addition to its diagonality. Whitening the autocorrelation of the source signals leads to an explicit conceptual expression of the separation matrix in ICA, including over-determined cases, as follows:
Let’s express the observation process as x=As, where x denotes an m dimensional vector representing observed signals, s denotes an n (
In the case of m = n, W_I = (A Ψ Σ^(1/2))^(-1) and W_G = A^(-1), so we obtain the simple relation W_I = Σ^(-1/2) Ψ^(-1) W_G.
Although the formulation given above is valid only for instantaneous mixtures, the formulation for convolutive cases can easily be derived by introducing the concept of matrix convolution.
Key Laboratory of Noise and Vibration, Institute of Acoustics, Chinese Academy of Sciences, P.R.China
ABSTRACT
This paper proposes a family of new robust adaptive filtering algorithms for stereophonic acoustic echo cancellation in impulsive noise environments. The new algorithms employ a sequential partial-update scheme to reduce computational complexity, which is desirable in the long-echo-path case. In addition, by employing a robust M-estimate technique, the new algorithms become more robust to impulsive noise than their conventional least-squares-based counterparts. These two advantages make the proposed algorithms good alternatives for stereophonic echo cancellation. Experiments are conducted to verify their efficiency.
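The two ingredients, sequential partial updates and an M-estimate nonlinearity, can be illustrated with the following single-channel Python/NumPy sketch (the paper addresses the stereophonic case and its own algorithm family; the NLMS form, Huber clipping and all parameter values here are assumptions for illustration only).

    import numpy as np

    def huber_psi(e, delta):
        """Huber-type M-estimate influence: linear for small errors, clipped for impulses."""
        return np.clip(e, -delta, delta)

    def partial_update_robust_nlms(x, d, n_taps=64, mu=0.5, delta=0.01,
                                   update_fraction=0.25, eps=1e-8):
        """Sequential partial-update NLMS with a robust error nonlinearity:
        only a fraction of the taps is updated at each sample."""
        w = np.zeros(n_taps)
        y = np.zeros(len(x))
        block = max(1, int(n_taps * update_fraction))
        start = 0
        for n in range(n_taps, len(x)):
            u = x[n - n_taps:n][::-1]                  # regressor, most recent sample first
            y[n] = w @ u
            e = huber_psi(d[n] - y[n], delta)          # robust error
            idx = (start + np.arange(block)) % n_taps  # taps updated in this iteration
            w[idx] += mu * e * u[idx] / (u @ u + eps)
            start = (start + block) % n_taps           # advance to the next tap block
        return w, y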
Akita Prefectural University, Japan
ABSTRACT
The precedence effect is regarded as one of the perceptual phenomena produced by multiple sound sources. This effect is also referred to as the law of the first wave front; it explains why humans localize sound at the position of the original source in an ordinary room with many reflections. Many studies of this phenomenon exist for sound sources located on the horizontal plane. Little is known, however, about the behavior of the precedence effect when the sound sources are not located on the horizontal plane. In order to apply the precedence effect actively in sound reinforcement systems without significant change to the sound localization due to the location of the additional sound sources, the present findings may be inadequate, and the behavior of the precedence effect must be investigated for various source locations. The purpose of our study is to clarify the behavior of the precedence effect generated by sound sources arranged in three-dimensional space. In this report, the main sound source was located in front, and the second (or sub) sound source was located on the mid-coronal plane. Seven experimental conditions were set by changing the position of the sub sound source (0°, ±30°, ±60°, ±90°, with 0° defined as the vertex of the subject's head). As a result, it was found that the behavior of the precedence effect clearly differs with the position of the sub sound source, i.e. the shift of the fused sound image toward the direction of the sub sound source becomes smaller as the directional angle of the sub sound source in the mid-coronal plane decreases.
Research School of Biology, Australian National University, Canberra, ACT, Australia
ABSTRACT
There have been two main theories of how the cochlea works: resonance and travelling wave. The first says the cochlea comprises a bank of tiny resonating elements, like piano strings, which respond directly to sound pressure (the excitation is in parallel to the elements). The second considers that differential pressure across the basilar membrane causes a hydrodynamically coupled wave to propagate, like a ripple on a pond, from base to apex (i.e., the excitation is in series). Yet a graded bank of independent resonating elements, if simultaneously excited, will give rise to an apparent travelling wave, as each element builds up and decays, governed by its Q. Here we model a bank of resonators ranging from 1 to 10 kHz and possessing Q values from 12 to 25, in line with reported values and in accord with a recent surface acoustic wave (SAW) model of the cochlea. When simultaneously excited, the bank shows an apparent travelling wave moving from base to apex with a speed of several metres per second, a value similar to experiment. We conclude that the ‘travelling wave’ can be interpreted as arising from resonant activity.
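The central idea, that a graded bank of independent resonators excited simultaneously produces an apparent travelling wave, can be sketched as follows in Python/NumPy. All numerical values (frequency range, Q, sampling) are illustrative; plotting one column of the returned matrix (a fixed instant across the array) for successive time indices shows a wave pattern moving from base to apex.

    import numpy as np

    def resonator_bank(f_min=1000.0, f_max=10000.0, n=200, q=20.0,
                       fs=100000.0, duration=0.02):
        """Impulse responses of a graded bank of independent damped resonators,
        all excited at t = 0, arranged from base (high CF) to apex (low CF)."""
        t = np.arange(int(fs * duration)) / fs
        freqs = np.geomspace(f_max, f_min, n)        # tonotopic axis, base to apex
        responses = np.empty((n, len(t)))
        for i, f0 in enumerate(freqs):
            decay = np.pi * f0 / q                   # amplitude decay rate for quality factor Q
            responses[i] = np.exp(-decay * t) * np.sin(2 * np.pi * f0 * t)
        return freqs, t, responses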
(1) LAUM - UMR CNRS 6613, Le Mans, France (2) Neurelec, Vallauris, France
ABSTRACT
Cochlear implant users would like a fully concealed hearing aid with no external microphone. Moreover, it is not really convenient to have to remove the external microphone when washing, sleeping, and so on: a sub-cutaneous one would be much more appropriate. This paper aims at providing an analytical model of such a receiver, i.e. an implantable, biocompatible microphone cartridge, in the frequency range of interest (70 Hz to 10 kHz). The model includes the strong coupling between several domains, respectively: the skin, considered as a membrane back-loaded by an air layer; a titanium membrane; an air gap behind the membrane; a hole (considered as a "sink" in the propagation equations) facing a small tube at the periphery of this air gap; and, at the output of the tube, a small backing cavity containing a commercial microphone. This analytical model leads to an expression for the acoustic transfer function. It relies on the Poiseuille law to account for the effects of the viscous boundary layers (which assume non-slip conditions on the walls) and on Fourier's law to account for the thermal boundary effects (which assume isothermal boundary conditions). These thermoviscous phenomena play important roles, especially inside the air gap and inside the tube set between the air gap and the small backing cavity.
Expansions on Dirichlet and Neumann eigenmodes (Fourier and Bessel-Fourier series) are used to solve the whole problem, leading to solutions valid up to 100 kHz, which is of interest for other kinds of microphones in current applications (especially in metrology). An approximation of these expansions, in which all the modes except the first are neglected, leads to a generalized lumped electric circuit. Since the expressions for the lumped elements and the structure of the equivalent circuit are obtained analytically, this last approach may be useful for describing the device correctly in the frequency range considered for the hearing-aid application. A lowest-order approximation, valid in the lowest frequency range, can be derived by assuming first-order approximations for the expressions of the lumped elements. These results enable the parameters of the receiving transducer to be scaled in order to meet given acoustic specifications for the implanted microphone.
(1) Institute of Technical Acoustics, RWTH Aachen University, Germany (2) Audiological Research, Widex A/S, Denmark
ABSTRACT
Couplers are used for the development and measurement of hearing aids. Couplers for children or babies are, however, not yet available, owing to a lack of knowledge about the correct data. To obtain the data needed to design suitable couplers for children and babies, the input impedance of their ear canals is important. Recent studies have shown that the ear canal impedances of children under six years of age differ considerably from typical adult impedances, and that impedances vary tremendously from newborn infants up to six-year-old children. The most important age group, however, is children younger than about 2-3 years of age, since their impedances cannot be replicated by adult data.
Hence an appropriate coupler is needed, as nowadays more and more hearing aids are prescribed and fitted for small children (aged 6 months and older). There are various possibilities for measuring the input impedance of real ears, but in all cases the dimensions of the probe are limited by the small ear-canal entrances of the children.
When using impedance probes with a small diameter, the measurement results are affected by acoustic losses in the small tube due to temperature, viscosity and density. The measurement is also affected by the coupling of the small tube to the ear, the resulting radiation effect at the cross-sectional jump, and the positions of the loudspeaker and microphone. Finite-element simulations can model these acoustic losses and reveal the critical areas of the measurement system. This contribution deals with these problems and with some approaches for measurements using very small impedance probes.
Acoustics, Aalborg University, Denmark
ABSTRACT
Otoacoustic emissions (OAEs) are weak sounds that can be recorded in the external ear. They are generated by the active amplification of the outer hair cells and are believed by many to reflect the status of the most vulnerable part of the hearing better than ordinary behavioral thresholds. Distortion-product OAEs (DPOAEs) are generated in response to a two-tone external stimulus with frequencies f1 and f2. One of the strongest DPOAEs is the component at 2f1-f2. This component is elicited on the basilar membrane in the overlap region of f1 and f2, close to the f2 place (depending on the frequency ratio and the levels of the two-tone stimulus). The 2f1-f2 component travels along the basilar membrane and excites activity at the 2f1-f2 place, which yields a second component with the same frequency. The sound recorded in the external ear canal is the superposition of these two components. The result is characterized by a distinct fine-structure pattern and generally does not directly reflect the status of the hearing at one point on the basilar membrane. The behavioral threshold, on the other hand, is more directly related to given points along the basilar membrane, but it reflects the combined status of outer and inner hair cells. Thus the combination of DPOAE measurements and hearing thresholds has the potential to provide a better basis for hearing diagnosis. In the present study, both DPOAE measurements and hearing thresholds are determined with a fine frequency resolution. The results are compared, and similarities and differences are discussed.
Centre for Applied Hearing Research, Department of Electrical Engineering, Technical University of Denmark.
ABSTRACT
The aim of this study was to accurately simulate auditory evoked potentials (AEPs) from various classical stimuli, such as clicks, chirps and tones, often used in research and clinical diagnostics. In an approach similar to Dau (2003), a model was developed for the generation of auditory brainstem responses (ABR) to transient sounds and frequency-following responses (FFR) to tones. The model includes important cochlear processing stages (Zilany and Bruce, 2006) such as basilar-membrane tuning and compression, inner hair-cell (IHC) transduction, and IHC auditory-nerve (AN) synapse adaptation. To generate the AEPs recorded at remote locations, an elementary unit waveform (obtained empirically) was convolved with the instantaneous discharge-rate function of the corresponding AN unit. AEPs to click trains and to tone pulses at various frequencies were both modelled and recorded at different stimulation levels and repetition rates. The observed nonlinearities in the recorded potential patterns, with respect to ABR wave latencies and amplitudes, could be largely accounted for by level-dependent BM processing and by effects of short-term neural adaptation. The present study provides further evidence for the importance of cochlear tuning and AN adaptation for AEP patterns and provides a useful basis for the study of more complex stimuli, including speech.
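The AEP generation step, convolving each unit's instantaneous discharge rate with an empirically obtained elementary unit waveform and summing over units, can be sketched as follows in Python/NumPy (array shapes and names are assumptions for illustration).

    import numpy as np

    def simulate_aep(discharge_rates, unit_waveform):
        """Far-field potential modelled as the sum over auditory-nerve units of the
        instantaneous discharge rate convolved with the elementary unit waveform.
        discharge_rates: (n_units, n_samples) array, one row per AN unit."""
        aep = np.zeros(discharge_rates.shape[1] + len(unit_waveform) - 1)
        for rate in discharge_rates:
            aep += np.convolve(rate, unit_waveform)
        return aep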
(1) Leibniz Institute for Neurobiology, Magdeburg, Germany (2) School of Psychology and Psychiatry, Monash University, Melbourne, Australia (3) Bionic Ear Institute, Melbourne, Australia
ABSTRACT
The seminal study of Sachs and Abbas modelled the discharge rate vs. sound level functions of auditory-nerve fibers (ANFs) as the result of the interaction of a 'mechanical stage', describing basilar-membrane displacement as a function of sound amplitude, with a 'transducer stage', converting displacement into ANF discharge rate. The latter stage was modelled as a saturating power law with a power of 1.77. Spontaneous rate (SR) was assumed simply to add to the sound-driven rate. Later investigators proposed an integer power of 2, or powers varying between about 1 and 3 depending on SR. Apart from the added complexity required to explain different powers, the suggested values are difficult to reconcile with a related psychoacoustic phenomenon, the dependence of detection thresholds in quiet on sound duration. It was shown that such thresholds can be understood as resulting from probability summation of detection events whose rate is proportional to the 3rd (but no lesser) power of sound amplitude. Support for a power of 3 was also derived from the dependence of ANF first-spike timing on sound amplitude and rise time.
Here, we present a physiologically plausible modification of the 'transducer stage'. Spontaneous activity and its correlation with ANF sensitivity are emergent properties of the model. We show that for frequencies well below CF, where the mechanics are linear, the power which optimally accounts for all measured cat ANF rate-level functions is 3, independent of SR or characteristic frequency (CF). Furthermore, and remarkably, the mathematical equation derived here for the 'transducer stage' is formally equivalent to the Adair and Hill equations in chemistry for the highly cooperative binding of 3 ligand molecules to a macromolecule. Since the C2A domain of the ubiquitous Ca2+ sensors involved in fast exocytosis (synaptotagmins I and II) binds exactly 3 Ca2+ ions in a highly cooperative fashion, we speculate that the shapes of the rate-level functions largely reflect the variation in the proportion of saturated Ca2+-sensor molecules with the concentration of Ca2+, which in turn depends on the sound amplitude. Our analyses suggest that the 'transducer stage' operates with power 3 in all inner hair cells, while its sensitivity differs among synapses, e.g. due to differences in Ca2+ concentration for the same stimulus. Finally, our model unites absolute thresholds at the perceptual level with the shapes of ANF rate-level functions.
(1) The Bionic Ear Institute, East Melbourne, Victoria, Australia. (2) The University of Melbourne, Victoria, Australia.
ABSTRACT
Enjoyment of music is an important part of life that may be degraded for people with hearing impairments, especially those using cochlear implants. The ability to follow separate lines of melody is an important factor in music appreciation. This ability relies on effective auditory streaming, which is much reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues could reduce the difficulty of segregating a melody from background notes for 1) people with normal hearing and extensive musical training, 2) people with normal hearing and no musical training, and 3) musically untrained cochlear implant users. Normal-hearing musicians (N=18), normal-hearing non-musicians (N=19), and cochlear implant (CI) users (N=11) were asked to rate the difficulty of segregating a four-note repeating melody from interleaved random distracter notes. The pitch of the background notes was varied gradually throughout blocks, providing a range of difficulty from easy (a large pitch separation between melody and distracter) to impossible (melody and distracter completely overlapping). Visual cues were provided on half the blocks; average difficulty ratings for blocks with and without visual cues were compared between groups.
When no visual cues were present, musicians rated the task as less difficult than non-musicians, with CI users reporting the most difficulty. For normal-hearing listeners, visual cues and musical training both reduced the difficulty of extracting the melody from the distracter notes. However, musical training was not required for the visual cue to be effective, with musically untrained listeners showing the largest reduction in difficulty. CI users also reported significantly reduced difficulty extracting the melody when using the visual cue, reporting similar difficulty ratings to normal-hearing listeners without the aid of the visual cue. These results are consistent with theories suggesting an important role for central (top-down) processes in auditory streaming mechanisms, and suggest that visual cues may be an effective aid in assisting CI users to enjoy music. No special training was required for normal-hearing listeners and CI users in order for the visual cue to reduce the difficulty of extracting the melody. Further research is required to optimise the design of the display and to determine the most useful acoustic features for the display to encode.
National Institute of Information and Communications Technology (NICT), Kyoto, Japan
ABSTRACT
A major impediment to effective personalization of head-related transfer functions (HRTFs) for spatial audio is our incomplete knowledge of the acoustic effects of the folds and cavities of the human pinna, which has a unique geometry for each individual. For HRTF personalization and many related applications, a more complete account is needed of the relationships between the complex pinna geometry and perceptually important acoustic features such as the HRTF peaks and notches. Toward this goal, we show extended results of computer simulations designed to reveal acoustic sensitivities of the DB60 right pinna of the well-known KEMAR manikin. In an earlier study we verified the accuracy of KEMAR's right-ear HRTFs simulated with the Finite Difference Time Domain (FDTD) method by showing how well they matched independent acoustic measurements. In the present study we use FDTD simulation as a tool to investigate how KEMAR's pinna-related transfer functions (PRTFs) vary in response to small, localized perturbations of the pinna surface geometry.
In particular, we started with the original shape of the DB60 right pinna and an adjacent 6.2 x 8.4 cm patch of KEMAR's head. A volume enclosing this original shape was voxelated on a uniform 3D grid with a resolution of 2 mm, and FDTD simulation was used to obtain a baseline set of PRTFs at 45 spatial locations covering a wide range of azimuth and elevation angles, at a distance of 1 m from the ear. A total of 1784 unique "micro"-perturbations were then effected by adding a single voxel at a time along the entire outer surface of the pinna and the adjoining head side. For each perturbation, a full set of PRTFs was obtained by simulation and compared with the original PRTFs, to precisely quantify the resulting shifts in the centre frequencies of 234 acoustic features (maximally 4 peaks and 3 notches per PRTF, across 45 spatial locations) that appeared up to 14 kHz. This large amount of data allowed the creation of detailed 3D maps of the pinna showing patterns of frequency sensitivity for every peak and notch. These sensitivity maps reveal the anatomical parts of the pinna most strongly affiliated with each acoustic feature. Furthermore, they provide a visual representation of the physical modes of resonances (peaks) and anti-resonances (notches), and thus give clues regarding the acoustic features' generative mechanisms.
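As a rough illustration of how centre-frequency shifts of peaks and notches can be quantified by comparing a baseline and a perturbed PRTF, the following sketch uses SciPy peak picking; the prominence threshold, the 1-kHz matching tolerance and all function names are assumptions made here, not details taken from the study:

    import numpy as np
    from scipy.signal import find_peaks

    def feature_freqs(mag_db, freqs, kind="peak"):
        """Centre frequencies of spectral peaks or notches up to 14 kHz."""
        sel = freqs <= 14000.0
        x = mag_db[sel] if kind == "peak" else -mag_db[sel]
        idx, _ = find_peaks(x, prominence=3.0)   # assumed 3-dB prominence criterion
        return freqs[sel][idx]

    def feature_shifts(base_db, pert_db, freqs, kind="peak"):
        """Pair each baseline feature with the nearest perturbed feature and
        return the frequency shift in Hz (NaN if no match within 1 kHz)."""
        f_base = feature_freqs(base_db, freqs, kind)
        f_pert = feature_freqs(pert_db, freqs, kind)
        shifts = []
        for fb in f_base:
            if f_pert.size == 0:
                shifts.append(np.nan)
                continue
            fp = f_pert[np.argmin(np.abs(f_pert - fb))]
            shifts.append(fp - fb if abs(fp - fb) < 1000.0 else np.nan)
        return np.array(shifts)

Repeating such a comparison for every voxel perturbation and every source direction yields the per-feature sensitivity values from which 3D maps of this kind can be assembled.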
(1) NICT (2) Chiba Institute of Technology, Narashinoshi, Japan
ABSTRACT
Spectral peaks and notches observed in head-related transfer functions (HRTFs) provide cues for localization of sound sources in the median plane. These peaks and notches are caused by complicated pinna cavities. In order to examine how these peaks and notches are generated, the pinna was simply modeled as a rectangular thin plate with a vertically long rectangular hole, which was modified to obtain the typical pattern of human HRTFs. The finite-difference time-domain method was used for calculating HRTFs of the modeled pinna and for visualizing pressure distribution patterns at peaks and notches. The results indicated that the first peak was caused by the quarter wavelength resonance in the depth direction of the hole. The peak amplitude was almost constant across elevation angles of the source. The second and third peaks, however, were derived from resonances occurring along the vertical direction of the hole. These two resonances were remarkably similar to the first and second order resonances of a closed-closed tube, and thus pressure anti-nodes developed at the upper and lower ends of the hole. As the entrance of the ear canal was located near the lower end, these resonances were observed as the second and third peaks of HRTFs. The amplitudes of these two peaks showed strong directivity, with considerable variation as a function of elevation angle. The amplitudes reached maximum values when the sound source was placed in the top direction, that is, when sound waves arrived along the longitudinal direction of the hole. In addition, the vertical resonances of the hole were associated with a spectral notch, which was observed as the first notch of HRTFs. The notch frequency changed with the elevation angle, while the peak frequencies were constant. The frequency of the first notch was lower than that of the second peak when the sound sources were below the horizontal plane, but it increased as the elevation angle approached the top direction. The shape of the hole above the entrance of the ear canal affected the frequency and amplitude of the second and third peaks and the first notch.
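For reference, the resonances invoked above follow the usual tube formulas; with $c$ the speed of sound, $L_d$ the depth of the hole and $L_v$ its vertical length (symbols chosen for this summary, not taken from the paper), the quarter-wavelength (closed-open) resonance responsible for the first peak and the closed-closed resonances responsible for the second and third peaks lie near

    $$ f_{1} \;\approx\; \frac{c}{4 L_{d}}, \qquad f_{m+1} \;\approx\; \frac{m\,c}{2 L_{v}}, \quad m = 1, 2. $$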
The Acoustic Group, Lilyfield, NSW, Australia.
ABSTRACT
The author suffered serious injuries, including a brain injury, as a result of a fall. Part of the recovery treatment has involved sound therapy and neuro-feedback, with amazing results. There are different types of sound therapy and different processes that make significant claims. Such work is rarely undertaken by an acoustic engineer who can make progressive hearing measurements, in co-operation with the psychologist carrying out the therapy, to ascertain the resultant effect. The restoration of hearing attributed to Sound Therapy is examined and discussed to ascertain whether the claims are fact or fiction, or whether the terminology is misused by those who do not understand what is being achieved.
(1) Basaveshwar Engineering College, Bagalkot, India (2) V.J.T.I, Mumbai, India (3, 4) Indian Institute of Technology Bombay, Mumbai, India
ABSTRACT
Sensorineural loss is characterized by increased hearing threshold, reduction in the dynamic range of hearing and recruitment, and increased temporal and spectral masking, resulting in degraded speech perception. Several techniques, including spectral contrast enhancement, multi-band frequency compression, and dichotic binaural presentation, have been investigated for reducing the adverse effects of increased masking. Assessment of speech processing techniques and optimization of processing parameters involves listening tests on hearing-impaired listeners. These tests are time consuming and may cause fatigue, particularly in elderly subjects. A simulation of hearing loss, by processing the speech signal through a model of the loss characteristics, is useful for conducting listening tests on normal-hearing subjects, for a preliminary evaluation of the schemes and particularly for selecting the processing parameters. The present study used addition of broadband noise, band-limited to the speech frequency range, at a specific SNR with respect to the short-time (10 ms) energy of the signal. Different levels of loss were simulated by varying the SNR. In this simulation, no noise gets added during silence segments. Listening tests to assess the loss simulation were conducted using three types of test material: vowel-consonant-vowel (VCV) utterances with vowel /a/ and twelve consonants, phonetically balanced (PB) word lists, and the modified rhyme test (MRT). The recognition score from subject responses was used as a measure of speech intelligibility, and the response time was used as a measure of the load on the perception process. For all three test materials, the decrease in recognition scores and increase in response times for normal-hearing subjects showed the same pattern as the corresponding results for subjects with moderate-to-severe sensorineural loss. A relative information transmission analysis of the stimulus-response confusion matrices for VCV utterances showed that the simulated loss did not affect reception of voicing and nasality features and had the maximum adverse effect on the reception of place and duration features, indicating that the addition of broadband noise with constant SNR with respect to short-time signal energy simulated an increased spectral and temporal masking.
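A minimal sketch of the noise-addition scheme described above is given below; the 10-ms frames and the constant SNR relative to short-time signal energy follow the abstract, whereas the band edges, filter order and function names are illustrative assumptions:

    import numpy as np
    from scipy.signal import butter, lfilter

    def simulate_loss(x, fs, snr_db, frame_ms=10.0, band=(100.0, 7000.0)):
        """Add speech-band-limited noise at a fixed SNR relative to the
        short-time (frame) energy of the signal; silent frames get no noise."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        noise = lfilter(b, a, np.random.randn(len(x)))
        n = int(fs * frame_ms / 1000.0)
        y = np.copy(x).astype(float)
        for start in range(0, len(x) - n + 1, n):
            seg = y[start:start + n]
            p_sig = np.mean(seg ** 2)
            if p_sig <= 0.0:
                continue                      # silence segment: no noise added
            nseg = noise[start:start + n]
            p_noise = np.mean(nseg ** 2) + 1e-12
            gain = np.sqrt(p_sig / (p_noise * 10 ** (snr_db / 10.0)))
            y[start:start + n] = seg + gain * nseg
        return y

Lowering snr_db then corresponds to simulating a greater degree of loss.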
(1) The University of Manchester, UK (2) Rotman Research Institute, Toronto, Canada (3) Cambridge University, UK
ABSTRACT
A dead region (DR) is a region of the cochlea with no functioning inner hair cells and/or neurones. A DR can be detected by measuring: (1) masked thresholds in threshold equalizing noise (TEN) and (2) psychophysical tuning curves (PTCs). Both methods require behavioural responses from the patient. An early diagnosis of DRs is very important for hearing-aid fitting and for assessing a child for a cochlear implant. This study was intended as a first stage in developing an objective method of diagnosing DRs that would eliminate the requirement for behavioural responses from the patient, thus making it useful for detecting DRs in young children. The auditory steady-state response (ASSR) provides the basis for such an objective test. The ASSR is an evoked potential that closely follows the time course of the stimulus modulation; the evoked response is specific to the frequency of the carrier. Using normally hearing and hearing-impaired adults, we investigated: (1) the effect of TEN on the amplitude of ASSRs; (2) the effects that using notched-TEN stimuli had on the accuracy of TEN-test results; and (3) the possibility of measuring electrophysiological tuning curves (ETCs) using amplitude-modulated frequency-swept carrier frequencies. The results show that in normally hearing adults: (1) TEN has a greater masking effect on the threshold measured electrophysiologically than on the threshold measured behaviourally; (2) the introduction of the narrow-band notches does not significantly reduce the masking effect of the noise on the amplitude of ASSRs; (3) it is possible to obtain reliable ETCs using ASSRs. The results for hearing-impaired adults showed large variability. Thus more research is needed on masking of ASSRs and ETCs in hearing-impaired people.
(1) The Bionic Ear Institute, East Melbourne, Victoria, Australia (2) The University of Melbourne, Victoria, Australia
ABSTRACT
Music is often composed of different melodic lines that are played together, either on the same or different instruments. These melodic lines, or 'streams', are often defined or separated by a number of perceptual parameters, such as pitch, timbre or loudness. One important aspect of listening to music is to be able to hear these melodic lines separately and in comparison to each other. Hearing impairment, particularly using a cochlear implant, reduces the perceptual differences between auditory sources, thereby reducing auditory stream segregation and affecting the ability to enjoy music. Cochlear implant users are known to have poor perception of pitch and timbre but relatively good perception of time-based sound features, such as rhythm. Musicians, on the other hand, have extensive training in auditory streaming and in using subtle acoustic cues to separate sound sources. The aim of this study was to examine the effect of four acoustic parameters on the difficulty of extracting a simple 4-note melody from a background of distracter notes. Melody extraction difficulty ratings were recorded while four acoustic parameters of the distracter notes were varied separately: fundamental frequency (F0), intensity, temporal envelope and spectral envelope. The average difficulty ratings for listeners with normal hearing and no musical training (N=19) were compared with two other groups - musicians with normal hearing (N=18) and cochlear implant (CI) users (N=11).
The average difficulty ratings for musicians were lower than for non-musicians for all four parameters, reflecting the effect of training on auditory streaming. For CI users, difficulty ratings were higher when the distracter notes varied in F0 and the spectral envelope. These results reflect the difficulty that CI users have in pitch and timbre discriminations. However, CI users reported difficulty ratings within the range of non-musically trained listeners when the distracter notes varied in intensity and temporal envelope. These results likely reflect the operation of the CI sound processor, which presents gross spectral and temporal envelope cues well, but does not resolve individual harmonics of the fundamental frequency (F0) or fine timing cues. The results have implications for the design of new CI sound processors that will enhance music appreciation through the artificial enhancement of specific acoustic cues.
Key Laboratory of Machine Perception (Ministry of Education), Department of Intelligence Science, Peking University, Beijing, P.R.China
ABSTRACT
The impoverished auditory information provided by a cochlear implant (CI) results in impaired auditory perception and production in many CI users. Speech perception in CI users has been well studied and fully documented. In contrast, much less is known about speech production in CI users. Tactile stimulation has been shown to enhance CI performance. In the present study, we aim to discover to what extent tactile-enhanced speech perception can modulate speech production in CI users. Twelve Mandarin-Chinese speaking children with unilateral cochlear implants were tested under two conditions: vocalizing Mandarin syllables after listening to the audio playback of the same syllables, with or without tactile stimulation of the same syllables applied to the fingertip. Objective and subjective evaluations were conducted on the recorded Mandarin syllables produced by the subjects. The clarification rate (CLR) and correction rate (CRR) of the subjects' tone production, evaluated by normal-hearing listeners, were analysed. The perception index (Pt), which indexes the contribution of tone perception to tone production, was calculated. The results show that the subjects' tone production is clearer and more recognizable with combined tactile and CI stimulation than with CI stimulation alone, and that the contribution of tone perception to tone production is higher in the CI+Tactile condition than in the CI-only condition. Our results provide important clues for designing efficient speech rehabilitation training programs for CI users.
Key Laboratory of Machine Perception (Ministry of Education), Department of Intelligence Science, Peking University, Beijing, P.R.China
ABSTRACT
Cochlear implants (CIs) successfully restore speech communication in profoundly deaf people. Speech perception by CI users can reach close to 100% in quiet conditions. However, CI users perform poorly in pitch-related tasks such as music perception. Studies have shown that the lack of explicit coding of low-frequency information (< 500 Hz) by the CI processor contributes to the impaired perception of pitch. It remains unknown how vocal production relates to the impaired pitch perception in CI users. In this study, we tested the perception and vocal production of low-frequency musical notes in 14 CI children. The musical notes used in the study were C3 (262 Hz), D3 (294 Hz), E3 (330 Hz), F3 (349 Hz), G3 (392 Hz), A3 (440 Hz), and B3 (494 Hz). In the perception task, subjects were asked to discriminate whether a pair of notes was the same or different. In the vocal production task, subjects vocalized the musical notes after listening to the same notes vocalized by a female, and their vocal production was recorded. Objective and subjective evaluations of the subjects' vocal production were carried out. Correct responses in note discrimination, pitch contour perception, and the extracted F0 range of the sound samples were analyzed. Results show that CI children can discriminate musical notes when the notes are 2 or more semitones apart, with correct responses rising from 67% to 100% as the frequency difference increased from 4 to 9 semitones. However, all CI users showed a significant deficit in the production of the same musical notes, with the average correct response by subjective evaluation at chance level for 6 out of 9 subjects. The F0 of the subjects' vocal production of the notes was significantly lower than that of the original sounds they heard, with flat frequency variation across the notes from C3 to B3. Overall, the subjective evaluation of the subjects' vocal production of musical notes showed poor pitch contour recognition. The findings of the present study show that the vocal production of low-frequency notes by CI users does not reflect the accuracy of their perception, suggesting that the distorted auditory feedback provided by the CI disrupted their vocal production.
(1) Tokyo Denki University, Japan (2) Tokyo Denki University (Nohmi Bosai Ltd.), Japan (3) The University of Tokyo, Japan
ABSTRACT
Warning sounds are important for enabling people to survive in emergency situations. A warning sound should not only be heard clearly but should also attract people's attention instantly in an emergency. In the present research, the effects of warning sound on the impression of urgency are investigated from the viewpoint of auditory cognition. Four experiments were carried out to reveal the effects of frequency, loudness, and ringing cycle of a warning sound. Results show that a louder warning sound creates a more urgent impression, and that warning sounds with frequencies of 125 Hz and 4,000 Hz create a more urgent impression. Furthermore, warning sounds with ringing cycles of 0.25 s and 0.125 s invoke a greater sense of urgency than a cycle of 1 s. The 1 s ringing cycle defined in the present research is equivalent to the standard ringing cycle of warning sounds required in ISO 8201, and almost all of the warning sounds measured in the cities of Tokyo and Taipei have frequencies of 1,000 or 2,000 Hz. These results suggest that a shorter ringing cycle makes a warning sound more urgent, and that a lower or higher warning-sound frequency causes a greater sense of urgency.
(1) Dongshin University, Naju, Korea (2) Chonnam National University, Gwangju, Korea
ABSTRACT
When an unforeseen disaster, such as an unexpected fire, occurs suddenly, evacuees need to understand evacuation and escape information correctly and to perceive the direction of evacuation if sound information is to guide them reliably in the right direction. This study examines evacuation-inducing sound that enables swift and accurate transmission of evacuation information to evacuees in an indoor space, with the aim of using voice information as the evacuation-inducing sound. Specifically, the study reviews how the voice information required for a refuge-inducing system should be expressed, and investigates how the content of the information can be grasped easily by changing the tempo and characteristics of the sound information and by presenting the voice information amid background noise.
(1) Centre for Applied Hearing Research, Department of Electrical Engineering, Technical University of Denmark, Lyngby, Denmark (2) Starkey Hearing Research Center, Berkeley, CA, USA
ABSTRACT
Frequency selectivity in the human auditory system is often measured using simultaneous masking of tones presented in notched noise. Based on such masking data, the equivalent rectangular bandwidth (ERB) of the auditory filters can be derived by applying the power spectrum model of masking and assuming a rounded-exponential filter shape. If a forward masking paradigm is used instead of simultaneous masking, filter estimates typically show significantly sharper tuning (by a factor of about 1.4). This difference in frequency selectivity has commonly been related to spectral suppression mechanisms observed in the cochlea. However, previous bandwidth estimates based on forward masking have only considered data averaged across a number of subjects. The present study is concerned with bandwidth estimates in simultaneous and forward masking in individual normal-hearing subjects. In order to investigate the reliability of the individual estimates, a statistical resampling method is applied. It is demonstrated that a rather large set of experimental data is required to reliably estimate auditory filter bandwidth, particularly in the case of simultaneous masking. The poor overall reliability of the filter estimates was found to be mainly related to the very short tone duration (i.e., 10 ms) that was chosen. Applying 300-ms tones in simultaneous masking drastically improved the reliability of the filter estimates. The tone duration in forward masking had to be very short to elicit a sufficient amount of masking. Based on extensive data for three subjects, the difference between forward and simultaneous masking estimates of auditory filter bandwidth was found to be even larger than previously reported, with a bandwidth decrease by a factor of about 1.8. The results of the study can be used to optimize measures of frequency selectivity, which is particularly useful when studying the consequences of (individual) hearing impairment.
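For context, the filter-shape assumption mentioned above is the standard rounded-exponential (roex) weighting used with the power spectrum model of masking; in its simplest one-parameter form (notation as commonly used in the notched-noise literature, not specific to this study),

    $$ W(g) \;=\; (1 + p\,g)\, e^{-p\,g}, \qquad \mathrm{ERB} \;=\; \frac{4 f_{c}}{p}, $$

where $g = |f - f_{c}| / f_{c}$ is the normalized deviation from the centre frequency $f_{c}$ and $p$ controls the steepness of the filter skirts.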
South China University Of Technology, Guangzhou, Guangdong, P.R.China
ABSTRACT
Adaptive methods are commonly used in current psychoacoustical experiments. There are roughly two types of adaptive methods, parametric and nonparametric; the latter is widely applied to simple threshold estimation because it is intuitive and convenient to operate. The present work develops a simulation programme and provides experimental verification for several nonparametric methods, such as Transformed Up-Down, PEST, and stochastic approximation and accelerated stochastic approximation (SA & ASA).
The simulation programme assumes a default psychological scaling function (PF) and uses the Monte Carlo method to simulate the processes of the different psychoacoustical methods. The statistics of 1000 simulations for each method indicate that, for the same number of trials and the same PF (step sizes and target probabilities differ slightly between methods), ASA has the greatest probability of producing estimates within 1 dB of the true threshold, which means its convergence is the best of all. Furthermore, in the real experiments, the variance of the ASA data is the smallest, and its repeatability and stability are the best of all. Finally, it can be concluded that ASA performs best among the nonparametric methods. However, when using ASA, experimenters need to choose the step size and target probability according to the specific conditions of each experiment.
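A minimal sketch of this kind of Monte Carlo simulation is shown below for one nonparametric method, a 1-up-2-down Transformed Up-Down staircase; the logistic psychometric function, step size, trial count and threshold rule are all assumptions made for illustration rather than the settings used in the study:

    import numpy as np

    def pf(level, threshold=50.0, slope=0.8):
        """Assumed logistic psychometric function: P(correct) vs. level in dB."""
        return 1.0 / (1.0 + np.exp(-slope * (level - threshold)))

    def run_1up2down(start=70.0, step=2.0, n_trials=60, rng=None):
        """Transformed up-down track converging on the 70.7%-correct point."""
        rng = rng or np.random.default_rng()
        level, n_correct, reversals, last_dir = start, 0, [], 0
        for _ in range(n_trials):
            if rng.random() < pf(level):
                n_correct += 1
                if n_correct < 2:
                    continue                      # no level change yet
                n_correct, direction = 0, -1      # two correct in a row: go down
            else:
                n_correct, direction = 0, +1      # one incorrect: go up
            if last_dir and direction != last_dir:
                reversals.append(level)           # record reversal level
            last_dir = direction
            level += direction * step
        return np.mean(reversals[-6:])            # estimate: mean of last reversals

    estimates = [run_1up2down() for _ in range(1000)]
    print(np.mean(estimates), np.std(estimates))

Comparing the spread of such estimates across methods, and against the known threshold of the assumed PF, reproduces the type of convergence comparison reported above.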
(1) CNRS-Laboratoire de Mécanique et d'Acoustique, Marseille, France (2) SNCF, Paris, France (3) Technocentre Renault, Guyancourt, France
ABSTRACT
Several authors have demonstrated that detection can be aided by the presence of signal energy in many auditory channels. The subjects were likely to adopt a broadband listening strategy. The results can be reasonably well understood in terms of the multiband energy detector model: the overall sensitivity to a multitonal complex (d'n) is assumed to equal the square root of the sum of the squares of the individual sensitivities (d'i).
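In symbols, the integration rule stated above reads

    $$ d'_{n} \;=\; \sqrt{\sum_{i=1}^{n} \bigl(d'_{i}\bigr)^{2}}, $$

so that for $n$ equally detectable components $d'_{n} = \sqrt{n}\, d'_{1}$.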
Consequently, the threshold for an n-component signal (expressed in dB/component) should be linearly related to the square root of the number of signal components. Whereas this rule is well established for equally detectable components, it seems to fail for equally intense components (Buus and Grose, JASA 2008). In this study, the detection of multicomponent signals that are not equally detectable was investigated as a function of the level difference between components. In the first condition, detection thresholds were determined for seven-tone complex signals (80, 160, 320, 640, 1280, 2560 and 5120 Hz), all equally detectable, with random starting phases, masked by white noise. In a second condition, the level relation between the components was varied: one or three of the seven components were increased by 5, 10, 15, 20 and 25 dB. In a third condition, thresholds were measured for a harmonic complex tone consisting of 200 components between 100 and 20000 Hz in white noise. This situation is closer to ecologically significant signals, which are typically more complex; one typical example is the voltage transformer noise in high-speed train coaches. Finally, we investigated the influence of masker type. The masker was a broadband noise with a set of harmonic partials, similar to an interior car sound. We examined the relative effectiveness of a broadband noise masker with harmonics compared to white noise. Determining the audibility of spectrally complex signals that are not equally detectable in a complex broadband noise masker, with or without harmonics, remains an unanswered question and an industrial need. In this contribution, we propose a perceptual criterion to support such signal representations.
(1) AG Technische Akustik, MMK, Technische Universität München, Germany (2) accSone, München, Germany
ABSTRACT
The term "remote psychoacoustic experiments" describes the situation in which, in contrast to traditional psychoacoustic experiments, the experimenter and the subject are located in different places, such as different offices, buildings, cities, countries, or continents. In essence, the psychoacoustic experiment is performed via the internet. In particular, calibration problems have to be solved so that the (remote) subject is presented with the sounds at an appropriate level, at least approximately. For the example of audio-visual interactions, the setup and calibration procedure are described in detail. Results from remote psychoacoustic experiments concerning perceived loudness differences of trains of different colour, but at the same SPL, are compared with data from traditional psychoacoustic experiments with the same stimuli.
Kanazawa Institute of Technology, Japan
ABSTRACT
In this paper, we investigated the precedence effect for combinations of four signals and three background sounds at three level differences between signal and background sound. In the subjective tests, the four signals were a male voice, a female voice, a siren tone and a sweep tone. The three background sounds were recorded in a mall, a hotel and a school corridor. The durations of the signals were 3-8 seconds. The background sound started two seconds before the head of the signal and stopped two seconds after the tail of the signal. The level differences between the signal and background sound were -6, 0 and +6 dB. The test signals were created on the assumption that a direct sound and the first reflected sound were surrounded by a non-directional background sound. The direct sound and the first reflected sound came from loudspeakers set 45 degrees to the left and right of the listener's median plane, respectively. Not only the direct sound from each loudspeaker to the listener but also the crosstalk sound from the opposite loudspeaker to the listener was calculated using HRTFs (head-related transfer functions). The distance between the loudspeakers and the listener was 1.4 m, owing to the measurement conditions of the HRTFs. The delay time of the first reflected sound was changed from 0 to 80 ms in twelve steps. The non-directional background sound was designed as if radiated from 36 loudspeakers arranged at equal intervals on a circle of 1.4 m radius, and was also calculated using the HRTFs.
One listening session consisted of five trials for each delay time, and there were twelve delay times; thus each subject listened to 60 test signals, presented in random order, in each session. There were three sessions for each condition. Two males in their twenties took part in the tests. The subjective test showed that the level difference between the signal and background sound, and the combination of signal and background sound, contributed to the direction of sound localization when the precedence effect occurred. Moreover, the combination of signal and background sound influenced the stability of sound localization. The female and male voices showed almost the same tendency; however, the non-voice signals, that is, the siren tone and sweep tone, showed a different tendency from the voices. It was confirmed that, when the precedence effect occurred in background sound, the complexity of the signal structure had an effect on the degree of its occurrence.
(1) Graduate School of Information Environment, Tokyo Denki University, Japan (2) School of Information Environment, Tokyo Denki University, Japan
ABSTRACT
This study examines the relationship between individual differences in sound-localization ability and anthropometric parameters of the pinna shape. We analyze the relationship between inter-subject differences in localization ability and pinna shape using a sound localization test in the median plane. We conducted a localization test for ten subjects using sound sources in the frontal quadrant of the median plane on the upper hemisphere. The results show inter-subject differences in localization ability, and two subjects could hardly respond to the sound sources. The degree of response error was quantified as the root-mean-square (RMS) error between the presented source angle and the perceived one. Ten pinna shape parameters were measured from seventy-two subjects, ten of whom were the same subjects as in the localization test. Principal component analysis was applied to the left-pinna shape parameters measured using Martin's method. Three components were extracted; the first principal component was considered to represent ear length and prominence from the head. The three component scores were compared with the response errors of the ten subjects. There was a significant negative correlation (r=-0.69, P<0.05) between the response error and the first principal component score. The results of the analysis therefore suggest that individual differences in localization ability are primarily related to the first principal component of the pinna shape.
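The two analysis steps described above can be sketched as follows; the data layout, the standardization and the use of scikit-learn's PCA are illustrative choices made here, not necessarily those of the authors:

    import numpy as np
    from sklearn.decomposition import PCA

    def rms_error(presented_deg, perceived_deg):
        """Root-mean-square response error between presented and perceived angles."""
        d = np.asarray(perceived_deg, float) - np.asarray(presented_deg, float)
        return np.sqrt(np.mean(d ** 2))

    def pinna_pc_scores(pinna_params, n_components=3):
        """PCA on standardized pinna parameters (e.g. 72 subjects x 10 measures)."""
        z = (pinna_params - pinna_params.mean(axis=0)) / pinna_params.std(axis=0)
        pca = PCA(n_components=n_components)
        return pca.fit_transform(z), pca.explained_variance_ratio_

    # correlation between first PC score and RMS error for the tested subjects:
    # r = np.corrcoef(scores_of_tested_subjects[:, 0], rms_errors)[0, 1]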
A-Volute, Villeneuve d'Ascq, France
ABSTRACT
The cancellation of transaural acoustic crosstalk is an essential and critical feature of all virtual auditory displays based on HRTFs (Head-Related Transfer Functions) when loudspeakers are used for listening. Generally, satisfactory crosstalk cancellation is achieved only within an extremely small sweet spot. Reducing the angle between the loudspeakers with respect to the listener increases the controlled area in the high frequency range, especially on the front-back axis, but makes cancellation and equalization harder at lower frequencies, where the amount of energy required to achieve the cancellation is prohibitively high. The directivity of the employed loudspeakers has a direct impact on the transaural acoustic crosstalk: the narrower the directivity, the lower the crosstalk level should be. An alternative to the use of a signal processing step to cancel the crosstalk would be to use highly directional loudspeakers to physically reduce the levels of the indirect path responses, i.e. from each loudspeaker to the corresponding contralateral ear, compared with those of the direct ones, i.e. from each loudspeaker to the corresponding ipsilateral ear. Devices known as parametric arrays employ the nonlinearity of the air to create audible sound from inaudible ultrasound, resulting in an extremely directive, beamlike wide-band acoustical source. This paper investigates the potential use of a pair of parametric arrays for HRTF-based transaural applications.
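For reference, the signal-processing step that such directional sources could partly replace is usually written as the inversion of the 2x2 electro-acoustic transfer matrix (the standard transaural formulation, with notation assumed here):

    $$ \begin{pmatrix} e_{L} \\ e_{R} \end{pmatrix} = \begin{pmatrix} H_{LL} & H_{RL} \\ H_{LR} & H_{RR} \end{pmatrix} \begin{pmatrix} s_{L} \\ s_{R} \end{pmatrix}, \qquad \mathbf{C} \;=\; \mathbf{H}^{-1}, $$

where $H_{ij}$ is the transfer function from loudspeaker $i$ to ear $j$ (the off-diagonal terms being the crosstalk paths), $s_{L,R}$ the loudspeaker signals and $e_{L,R}$ the ear signals. The cancellation network $\mathbf{C}$ inverts $\mathbf{H}$, which becomes ill-conditioned at low frequencies for small loudspeaker spans; highly directional sources would instead attenuate the off-diagonal terms acoustically.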
(1) Graduate school of Engineering, Utsunomiya University, Japan (2) Faculty of Engineering, Osaka Institute of Technology, Japan
ABSTRACT
This study focused on the characteristics of simultaneity perception for an auditory-visual stimulus. Experiments were carried out to evaluate the simultaneity between an auditory and a visual stimulus when preceding stimuli were presented. As the test stimuli, we used a pure tone (1000 Hz and 80 dB SPL) and a white light (LED). Both stimuli had a duration of 10 ms. The preceding stimuli, which were the same as the test stimuli, were presented successively 20 times at an interval of 50 ms, followed by the test stimuli at an interval of 200 ms. There were four kinds of presentation patterns of the preceding stimuli, as follows: only the sound stimuli were presented (sound attention test), only the light stimuli were presented (light attention test), both the sound and light stimuli were presented synchronously (sound-light synchronous attention test), and the sound or light stimuli were presented randomly (sound-light random attention test). The sound and light test stimuli had a stimulus onset asynchrony (SOA) from -160 to 160 ms, where a negative value indicates that the sound was presented first. We presented the test stimuli to the experimental subjects in each condition and asked them to report whether the sound or the light was perceived first. We then evaluated the point of subjective simultaneity (PSS) of the test stimuli. As a result, the PSS shifted toward sound precedence by 2 ms in the sound attention test, 10 ms in the light attention test, 9 ms in the sound-light synchronous attention test, and 15 ms in the sound-light random attention test.
Acoustics, Department of Electronic Systems, Aalborg University, Denmark
ABSTRACT
Because amplitude- and frequency-modulated sounds can be the basis for the synthesis of many complex sounds, they are good candidates for the design of training systems aiming to improve the acquisition of perceptual skills that can benefit from information provided via the auditory channel. One of the key issues when designing such training systems is the assessment of generalization of learning. In this study we present data on the learning of an auditory task involving sinusoidally amplitude- and frequency-modulated tones. Modulation rate discrimination thresholds were measured during pre-test, training, and post-test phases. During training, listeners were divided into two groups; one group trained on amplitude-modulation rate discrimination and the other group trained on frequency-modulation rate discrimination. Results showed a certain degree of specificity for the trained conditions, differences in learning rate, and generalization across modulation type.
(1) Faculty of Engineering, University of Yamanashi, Japan (2) Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Japan
ABSTRACT
Although the sense of presence is a key factor in evaluating AV (audio-visual) equipment, the meaning of the term is still vague. To clarify the structure of the sense, we conducted an experiment on AV contents. Firstly, 347 adjectives that might be used for expressing the sense of presence were collected from interviews, a dictionary, and magazines. The number of words was reduced to 29 pairs of adjectives by the KJ (Kawakita Jiro) method. Thirty-three daily scenes (e.g., a scene with a passing train) were recorded with a high-definition video camera while the sounds were recorded using a dummy head. Each of the AV contents was reproduced with a 65-inch display and headphones, and evaluated by the SD (semantic differential) method using a five-point category scale for the 29 pairs of adjectives. Sixteen subjects in their twenties participated in the experiment. The experimental data were analyzed by factor analysis. Seven factors were extracted, with an accumulated contribution ratio of 82.5%. The first factor, F1, is regarded as representing "activity" because pairs such as "quiet-noisy" and "persistent-plain" have the largest loadings on this factor. Similarly, F2 through F7 are regarded as representing "naturalness," "dailiness," "psychological loading," "potency," "entertainment," and "decorativeness," respectively.
(1) Research Institute of Electrical Communication, Tohoku University, Sendai, Japan (2) Faculty of Engineering, Shinshu University, Sendai, Japan
ABSTRACT
In the near future, we might communicate with a person in a remote place using a system with a high sense of presence. For such communication, it is important to capture and transfer comprehensive sound space information from the remote place to the local site. Toshima et al. developed a robot dummy head, the tele-head, which can move in synchrony with a listener's various head movements. This tele-head appears promising as a means of sensing the whole remote sound space interactively, acting as an avatar of the listener. We developed a simplified tele-head whose head moves synchronously, following only the listener's horizontal head rotation. Because the tele-head used in the present study does not follow the listener's head movements in roll and pitch, whether its synchronized movement in horizontal rotation alone is effective for rendering the transferred sound space is an interesting and important issue that demands examination. The authors investigated sound localization accuracy in such a situation. The tele-head was located at the center of a spherical loudspeaker array comprising 70 loudspeakers installed in an anechoic room. The listener sat on a chair in a remote soundproof room. The sound space in the anechoic room was captured and transferred to the listener with the tele-head. The tele-head was constructed with a 3D printer based on stereolithography so that, for each listener, its head shape was accurately identical to the listener's own. A sound localization test was performed in the median plane. Results show that sound localization accuracy is improved when the tele-head rotates in synchrony with the listener's head rotation, which suggests that the captured sound space should be reproduced in a way that is at least responsive to the listener's head rotation at the local site. Another median-plane sound localization test was conducted to examine the effects of manipulating the rotation angle. In this test, the tele-head rotation was controlled so that the ratio between the listener's actual movement and that of the tele-head varied from 0.5 to 1.5. Results show that these ratios do not affect the accuracy of sound localization to a statistically significant degree, suggesting large robustness in the use of cues provided by head rotation. Consequently, sound localization accuracy can be improved when the head of an avatar at a remote site rotates synchronously, with some positive correlation, with the listener's head rotation.
Air Force Research Laboratory, Walter Reed Army Medical Research Center, USA
ABSTRACT
Darwin et al. (2006) have shown that listeners can effectively utilize differences in fundamental frequency between a target and masker phrase to improve performance in a two-talker segregation task. Very little is known about an F0-based segregation strategy with more than two talkers, especially when the talkers are heard sequentially rather than simultaneously. In the current experiment, intelligibility was measured for a five-word target phrase when each target word was interleaved with two masker words. The relative pitch difference between the target and masker words was varied from 0 to 12 semitones in 2-semitone steps. Two listening conditions were tested: 1) Target Mid: the maskers were presented one each at the lowest and highest pitch and the target pitch was systematically varied between the two masker pitches; and 2) Masker Mid: the target and one masker were presented at the lowest and highest pitch and the pitch of the second masker was systematically varied between the two. Overall, intelligibility was always worst when the target had the same pitch as one of the maskers. When the target pitch was between the two maskers (Target Mid condition), an inverted U-shaped function was obtained; performance was best when the target pitch was exactly midway between the two masker pitches and decreased systematically as it moved towards the high-pitch or the low-pitch masker. In the Masker Mid condition, performance depended on the pitch of the target; for a high-pitch target, a large gain in intelligibility was obtained by placing one masking voice at a 2-semitone separation from the target, with no additional benefit obtained by moving the masker farther away in pitch. In contrast, when the target was the low-pitch voice, intelligibility continued to improve as the pitch of one masker was moved closer to the high-pitch masker voice; indeed, the best performance was obtained when both maskers had the same pitch and were maximally separated from the target. These results suggest that F0-based segregation strategies depend not only on the relative F0 differences but also on the absolute pitch of the voices to be segregated.
(1) Section of Acoustics, Department of Electronic Systems, Aalborg University, Denmark (2) Department of Experimental Psychology, University of Cambridge, UK
ABSTRACT
Noise with energy in the low-frequency range (i.e. below 200 Hz) is known to produce problems with annoyance and represents an environmental problem (Leventhall, 2004). Attempts to understand and predict problems produced by low-frequency noise require information about human frequency selectivity in the low-frequency range. However, there are few data on frequency selectivity for centre frequencies below 100 Hz. To estimate the characteristics of auditory tuning for very low frequencies, in this study psychophysical tuning curves (PTCs) were obtained for tonal and narrow-band noise maskers at centre frequencies (CFs) of 31.5, 40, 50, 63, and 80 Hz. For the tonal maskers, pairs of tones designed to produce modulation-detection-interference (MDI) were added to the masker, as a way to evaluate and reduce the effects of beats. For each subject, an equal-loudness-level contour was also obtained using closely spaced frequencies. This was used as a rough estimate of the shape of the individual middle-ear transfer function (METF) in the frequency range below 100 Hz. Preliminary results obtained using 9 subjects are described. Sharp tips were observed for some of the PTCs derived with the tonal maskers, probably reflecting the influence of beats. Addition of the MDI tones produced more regular and broad tips for cases where beating was evident. The PTCs obtained with the noise maskers were generally more regular around their tips. For both masker types, the overall shapes of the PTCs were broad, indicating that frequency selectivity at very low frequencies is relatively poor. Also, the PTCs were generally asymmetrical, with steeper lower skirts than upper skirts, an effect that became more pronounced as the CF was decreased. For the CFs of 31.5 and 40 Hz, the tips of the tuning curve did not occur at the CF, but at a higher frequency. The overall shapes of the tuning curves, and the degree to which tuning was affected at the lowest CFs, appear to be influenced by the shape of the (estimated) METF.
National Institute of Information and Communications Technology (NICT), Kyoto, Japan
ABSTRACT
Telecommunication over state-of-the-art audio-visual equipment potentially requires information on where the speaker is facing along with the sound track, especially when large-scale three-dimensional pictures are used, owing to the apparent closeness between speaker and listener. Acoustic information at the listener's ears indeed changes with the speaker's facing direction (even when the speaker's position is unchanged) and may seem unnatural if it is incongruent with the visual information. Towards a better understanding of both the perceptual and acoustical effects of such information, this paper presents the results of an empirical study designed to measure a listener's ability to identify the facing direction of a human speaker and to estimate the acoustic cues they use. Listener performance was assessed in an anechoic chamber. A male speaker sat on a pivot chair and spoke a short sentence while facing a direction that was randomly chosen from eight azimuthal angles or three elevation angles. Twelve blindfolded listeners heard the spoken sentence at a distance of either 1.2 or 2.4 m from the speaker and were asked to indicate the speaker's facing direction. In separate sessions, the speaker continuously changed facing angle while speaking and the listeners indicated the perceived direction of horizontal rotation (clockwise or counter-clockwise) or vertical rotation (upward or downward). The overall results showed that the listeners' average response errors were 23.5 degrees for azimuth and 12.9 degrees for elevation. These values were comparable to or better than those obtained in previous studies using a loudspeaker. The average correct-response rates for rotation direction (either horizontal or vertical) were equal to or greater than 80%. To identify the acoustic cues underlying the listeners' accurate performance, the acoustic transfer characteristics from the speaker's mouth to the listener's ears were first measured by the cross-spectral method. The results were then compared with those calculated by computer simulation using the finite difference time domain method, to mutually compensate for the potential inaccuracies of each method, i.e., the unstable nature of actual measurements using a living body and the systematic errors caused by the inevitable simplification in numerical simulation. The results suggested that the major cues included, but were not limited to, the overall level and spectral tilt for the front-back or up-down judgement, and the interaural level difference for the left-right judgement. These results provide a clue to making a more realistic sound track in multi-modal telecommunications.
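The cross-spectral estimate referred to above is the standard one (notation assumed here):

    $$ H(f) \;=\; \frac{S_{xy}(f)}{S_{xx}(f)}, $$

where $S_{xy}$ is the cross power spectral density between the source signal $x$ at the speaker's mouth and the signal $y$ at the listener's ear, and $S_{xx}$ is the auto power spectral density of $x$; averaging over many frames reduces the influence of uncorrelated noise.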
Japan Advanced Institute of Science and Technology, Nomi, Ishikawa, Japan
ABSTRACT
We have previously measured masked thresholds with and without the presentation of a cue tone, in which the same tone as the signal was presented before the signal, in simultaneous notched-noise masking. In those experiments the signal and cue-tone frequencies were only 1 kHz. We then estimated the shapes of the auditory filters from these data and found that the tip of the derived filter was sharpened by the presence of the cue tone as the signal level decreased. These findings suggest that psychophysical frequency selectivity can be improved by the presence of a cue tone. However, we have not yet investigated whether this improvement in frequency selectivity is also affected by the frequency of the signal and cue tone. It is already known that frequency selectivity varies with signal frequency, and the effect of the cue tone may likewise vary with signal frequency. The questions in this study are whether the effect of presenting a cue tone varies with signal frequency, and whether the difference between the signal and cue-tone frequencies affects frequency selectivity. We measured masked thresholds with and without a cue tone for various signal and cue-tone frequencies. The signal frequencies (fc) were 0.5, 1.0, 2.0, and 4.0 kHz. The cue-tone frequencies were 0.7fc, 0.8fc, 0.9fc, 1.0fc, 1.1fc, 1.2fc, and 1.3fc. The results showed a tendency similar to the previous study when fc was 1.0 or 2.0 kHz. However, when fc was 0.5 kHz the increase in masked threshold was smaller, and when fc was 4.0 kHz the effect disappeared. These results clearly show that the effect of presenting a cue tone changes with signal frequency. Moreover, an increase in masked threshold appeared only when the cue-tone frequency was the same as the signal frequency. These results suggest that the correspondence between the signal and cue-tone frequencies is one of the most important factors in the improvement of signal detection.
(1) Philips Research Europe, Eindhoven, The Netherlands (2) Human Technology Interaction, Eindhoven University of Technology, Eindhoven, The Netherlands
ABSTRACT
Humans are highly sensitive to Interaural Time Differences (ITDs) in stimuli presented via headphones. For broadband noise stimuli of long duration, ITD detection thresholds can be as low as 10 to 15 µs. When the stimulus duration is shortened, thresholds increase by somewhat less than a factor of 2 for a tenfold decrease in duration. ITD thresholds also increase when the probe carrying an ITD is surrounded by diotic fringes. When a 5-ms probe is combined with preceding and/or trailing fringes resulting in a stimulus of 50 ms, the effect of a fringe preceding the probe is stronger than that of a trailing fringe for fringe durations < 35 ms. The effect of fringes surrounding the probe is equal to the sum of the effects of the individual fringes.
In this contribution, we present behavioral data for the same experimental condition, called dynamically varying ITD detection, but for a wider range of probe and fringe durations. Probe durations varied between 5 and 400 ms, and fringe durations had values of 5, 20, 100 or 200 ms. In contrast to earlier findings, we observed for most duration combinations a stronger effect of the trailing fringe than of the preceding fringe. For these configurations, the effect of the surrounding fringes was dominated by the trailing fringe. Only for the combination of 5-ms fringes with 5-ms probes did we see a clear dominance of the preceding fringe. These results are not easy to reconcile with the concept of onset emphasis often used to explain binaural localization data for short stimuli. In fact, the data seem difficult to predict with a purely signal-driven model of perception and thus form an interesting challenge for modeling human localization.
The University of Sydney, NSW, Australia
ABSTRACT
This paper examines the effects of listening level and reverberation time on the reverberance of running musical stimuli. A listening test was conducted using an anechoic music stimulus convolved with synthetic RIRs having a range of listening levels and reverberation times: in the test, subjects adjusted the reverberance of a musical stimulus (by adjusting the decay rate of an impulse response convolved with dry music) to match that of reference stimuli. In this way, we constructed equal reverberance contours as a function of sound pressure level and reverberation time. The experimental results confirm that listening level and reverberation time both have a significant effect on reverberance. Loudness-based predictors of reverberance outperform the conventional reverberance predictors.
The Hong Kong Polytechnic University, Hong Kong, P.R.China
ABSTRACT
Annoyance has been identified as the most important psychological impact arising from noise. Recent studies have shown that individuals' socioeconomic status and the characteristics of their residential neighbourhoods make them perceive noise differently. In order to reveal the major modifiers of noise annoyance, an ordered logit model has been applied to analyse the effects of some possible modifiers of the noise annoyance experienced by city dwellers at home in Hong Kong. The model was formulated to include personal factors such as age, education level, gender, marital status, noise sensitivity, self-rated health and the perceived presence of nearby green areas. Data were collected through face-to-face interviews, and a total of 624 interviews were administered. Results indicated that the respondents' educational attainment, noise sensitivity, self-rated health and their perception of nearby green areas significantly affected the noise annoyance perceived at their homes. In particular, the perception of nearby green areas had a stronger effect on perceived annoyance than any other factor. The findings should be of paramount importance to urban planners.
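For reference, the ordered logit (proportional odds) model applied to such ordinal annoyance ratings has the standard form (symbols assumed here, not taken from the paper):

    $$ P(Y \le j \mid \mathbf{x}) \;=\; \frac{1}{1 + e^{-(\theta_{j} - \mathbf{x}^{\top}\boldsymbol{\beta})}}, \qquad j = 1, \dots, J-1, $$

where $Y$ is the ordinal annoyance category, $\mathbf{x}$ the vector of personal and neighbourhood modifiers, $\boldsymbol{\beta}$ their common coefficients and $\theta_{j}$ the category thresholds; the sign and magnitude of each element of $\boldsymbol{\beta}$ indicate how the corresponding modifier shifts reported annoyance.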
(1) Department of Psychology, Peking University, P.R.China (2) Department of Machine Intelligence, Peking University, P.R.China (3) Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, P.R.China
ABSTRACT
Frequency-dependent transient storage of acoustic fine structure is critical both for temporal integration of correlated signals and for releasing speech from informational masking in (simulated) reverberant environments. To investigate whether the human ability to transiently store acoustic details is generally associated with recognition of speech in noisy, reverberant environments, we invited 30 young-adult listeners to participate in two experiments, one examining their ability to temporally store fine-structure information of either wideband or narrowband noises, and the other examining their recognition of target speech against speech masking under various simulated reverberant conditions. In Experiment 1, a break in correlation (BIC) between interaurally correlated wideband or narrowband noises, which were presented over headphones, was detectable even when an interaural interval (IAI) was introduced. The longest IAI at which the BIC in either wideband or narrowband noises was detectable varied markedly across participants and, for narrowband noises, decreased monotonically as the center frequency increased. In Experiment 2, target speech was presented by two spatially separated loudspeakers, one simulating the source and the other simulating a reflection. Uncorrelated two-talker speech maskers were also presented by these two loudspeakers. Recognition of target speech markedly improved when the interval between the target-speech source and its single-reflection simulation (inter-target interval, ITI) was reduced from 64 to 0 ms. Target-speech intelligibility under simulated reverberant conditions (ITI ≥ 16 ms) correlated significantly with both the longest IAI for detecting the BIC in wideband noise and the longest IAI for detecting the BIC in narrowband noises, particularly low-center-frequency narrowband noises. These results indicate that the ability to temporally store acoustic fine structure, particularly low-frequency fine structure, is functionally associated with recognition of target speech under informational-masking conditions in reverberant environments.
Nagano College of Nursing, Nagano, Japan
ABSTRACT
Impressions of university students regarding the sounds of a rural nature area were investigated using a questionnaire survey and three experiments. In the questionnaire survey, urban university student participants (n = 119) were asked to indicate (1) sounds that made them feel comfortable, (2) sounds that made them feel uncomfortable, and (3) sounds that reminded them of nature. Many participants responded that (1) the sounds of nature were comfortable, (2) miscellaneous noises were uncomfortable, and (3) sounds of the wind, rain, streams, and birdsong reminded them of nature. These results suggest that most natural sounds in a rural nature area are comfortable. In the first experiment, urban university student participants (n = 61) rated their impressions of sounds heard in the rural nature area. Their ratings varied, but many found certain sounds to be nostalgic. In the second experiment, urban university student participants (n = 31) rated their impressions of the landscapes they imagined for the same sounds as in the first experiment. Their ratings varied, but many participants rated some of the imagined landscapes as nostalgic. In the third experiment, university student participants (n = 30) living in a rural nature area rated their impressions of the sounds and the related landscapes for the same sounds as in the previous experiments. Both ratings were rather weak, and the patterns of the impressions were similar between the sounds and the related landscapes. This suggests that familiarity with a sound may weaken the impression of the sound as well as of the landscape image it creates.
AG Technische Akustik, MMK, TU München, Germany
ABSTRACT
Judgements of loudness play an important role in basic and applied psychoacoustics, for example in the fields of sound-quality engineering or noise abatement. Although loudness mainly depends on physical properties of the sound such as level, duration, or spectrum, studies have shown that visual factors may also play a role during the perception and/or judgement of loudness. This contribution focuses on visual stimuli of different colours presented synchronously with sounds during loudness judgements. A number of studies were conducted to better understand this phenomenon and shed some light on possible factors influencing these audio-visual interactions. Results of selected studies are given and discussed with regard to the type of visual stimulus (e.g. synthetic images, pictures of objects), mode of presentation (e.g. monitor, projection screen), connection with the acoustical stimulus (plausible/implausible scenario), and other factors. In general, it was found that some colours are able to increase or decrease loudness judgements, but the effects showed large interindividual variability. Some subjects were apparently not influenced by the presented visual stimuli, while others over- or underestimated loudness by about 1 to 5%, with maxima up to 9%. Colours like red or pink seem to cause an increase in loudness, whereas grey or pale green were observed to decrease it.
TU Dresden, Germany
ABSTRACT
Spatial reproduction in a conventional stereophonic audio system (e.g., stereo or 5.1 surround) works in a small area known as the "sweet spot". If the listener changes his position, the phantom source moves in the same direction and finally collapses into the nearer loudspeaker. A play-back system that adjusts the loudspeaker signals depending only on the listener's position in real-time was evaluated in a previous study. Additionally, the orientation of the head in relation to the loudspeaker setup has an influence on phantom source localization. Localization errors that occur when the head is turned are discussed in this article. For this purpose a binaural localization model is used. It shows that the auditory event moves towards the median plane of the listener. This effect becomes stronger as the original phantom source position deviates further from the median plane. A compensation function is proposed and evaluated. Stable phantom source localization can be achieved using adaptive signal adjustment depending on the listener position and orientation. A demo version can be downloaded at www.sweetspotter.de.
(1) Laboratoire de Mécanique et d'Acoustique-CNRS, Marseille, France (2) Institut de Recherche et de Coordination Acoustique/Musique (STMS-IRCAM-CNRS), Paris, France
ABSTRACT
How do listeners judge the global loudness of non-stationary sounds? For long sounds (dozens of seconds), some authors have shown that the global loudness is mainly influenced by the loudest part of a sound sequence. Others have found that the end of the signal has a prominent effect on global loudness perception, especially if the loudest part is located at the end of the sequence. In this study, shorter sounds (1.8 s) were tested, varying linearly in intensity with either an increasing or a decreasing ramp. The global loudness of 1-kHz tones, synthetic vowel sounds and white noises was judged by participants using a magnitude estimation procedure. Several ramp ranges were tested with either of two sizes (15 and 30 dB). In addition, loudness functions were measured for 500-ms stationary sounds (tone, vowel and white noise). It was shown that global judgments of ramped sounds are close to, but always slightly lower than, the loudness estimates of stationary sounds with amplitudes equal to those at the end of the ramps. The global loudness for ramp sizes of 15 dB was greater than that for ramp sizes of 30 dB, indicating a loudness integration process over time. These results therefore show that global loudness is largely dependent on the loudness at the end of the sound and, to a lesser extent, on the dynamics of the ramp. For ramp ranges of lower intensities, the global loudness estimates for ramp sizes of 15 dB are equal to the loudness estimates at the end of the sound. Thus, it seems that the integrating window depends on the ramp range. For damped sounds, the global loudness is close to the loudness at the beginning of the sound. The global judgments were higher for ramp sizes of 15 dB than for ramp sizes of 30 dB. For ramp sizes of 15 dB, and for lower intensities, the global loudness estimates are equal to the loudness at the beginning of the sound. Finally, at a given ramp range and a given ramp size, the global loudness of a ramped sound is slightly greater than that of a damped sound, confirming previous results obtained for 1-kHz pure tones of similar duration (2 s). However, the effect is reversed for ramp ranges of higher intensities for the white noise and ramp sizes of 30 dB. Further investigations are needed to examine this asymmetry for longer durations.
Tokyo City University, Japan
ABSTRACT
When a sound is presented through a loudspeaker set up just in front of a listener, it is taken for granted that the sound will be localized at the fixed position of the loudspeaker even if the sound is ascending or descending in frequency. Is this true? The author investigated whether it is true or not. In Experiment 1, the sounds used were pure tones, 1/3-octave-band noises and piano tones. Each sound consisted of 22 tone bursts with a duration of 0.25 s, ascending along the whole-tone scale over three octaves from C4 (262 Hz) to C6 or descending from C6 to C4. Two ways of sound presentation were used. In one case (case-1), the sounds were presented through a fixed loudspeaker (SP-No.2) set up just in front of the listener, with eight cards numbered 1 to 8 set up vertically 0.3 m apart from each other; SP-No.2 was attached to the card numbered 2. In the other case (case-2), the sounds were presented through the same loudspeaker (SP-A), with seven dummy loudspeakers set up vertically and attached to the numbered cards. Before the experiment, 17 observers were instructed either that the sounds to be presented had been processed in localization with an ingenious method, although in fact no processing had been applied (case-A: deceptive instruction), or that no processing had been applied (case-B: honest instruction). The results show that 15 observers perceived overshoot of the sound images in localization; that is, they perceived the sound images as moving from a loudspeaker around SP-A to an upper position for the ascending sounds or to a lower position for the descending sounds, in both case-1 and case-2. In neither case-A nor case-B did the instructions affect the results. In Experiment 2, the ascending and descending sounds were successively concatenated over 1 octave, 2 octaves, and 3 octaves, respectively. The stimuli were presented at the fixed loudspeaker (case-a), or physically moved from loudspeaker 1 to 2 or 4 for the ascending part of the stimuli and subsequently moved from 2 or 4 back to 1 for the rest (case-b). The results show that the observers perceived greater overshoot of the sound images in Experiment 2 than in Experiment 1. These results imply that vertical localization depends on the tone heights of the sounds and on their movement, at least in this situation.
(1) School of Information Science, Japan Advanced Institute of Science and Technology, Japan (2) Research Institute of Electrical Communication, Tohoku University, Japan
ABSTRACT
Time and space are interdependent in perception. A most typical example is that temporal and spatial patterns of three successive stimuli, defining two inter-onset intervals and two spatial distances, can affect the experience of spatial and temporal variation, respectively. These effects are named the "Tau effect" and the "Kappa effect," and have been found in the visual and tactile modality. In regard to the auditory modality, most studies equate pitch space with ambient space and demonstrate that each of the temporal and pitch intervals can affect the perceptual pitch and temporal variations, respectively. A question of great interest is whether the interdependence between the temporal and ambient spatial variation, not the pitch variation, would be found when successive sounds differ in location. To investigate the effect of a temporal variation in a successive sound sequence on the experience of the ambient spatial variation, we measured subjective absolute positions and subjective differences of two neighboring distances when three successive sounds, A, B, and C, were presented to participants from different loudspeakers. The results showed that varying the time interval between A and B (t1) and between B and C (t2) affected the perceived spatial distances between the sounds. If, for example, t1 was larger than t2, the subjective distance between A and B (d1) was greater than that between B and C (d2), although the physical distances of d1 and d2 were equal. The results indicate that the typical Tau effect exists in the auditory modality. Furthermore, if a variation in a physical temporal pattern affects the experience of the ambient space, d1 and d2 should be perceived as equal when t1 is physically equal to t2. However, our results showed that when t1 was about 60 ms shorter than t2, d1 and d2 were perceived as equal. The illusory phenomenon in regard to auditory time perception, named time-shrinking, typically occurs when t2 is longer than t1 and when the difference between t2 and t1 is smaller than about 100 ms. In these time conditions, t2 is perceived to be much shorter than the physical interval because t2 is perceptually shrunk by adding t1. Thus, t1 and t2 were perceived as the same duration. These findings suggest that the perceptual, not physical, temporal pattern affects the experience of the auditory ambient space. The spatial information of successive sounds may be organized after a temporal structuring of a sound sequence.
Toyama Prefectural University, Toyama, Japan
ABSTRACT
This paper clarifies how much signal bandwidth is necessary for horizontal sound localization. Horizontal sound localization experiments were conducted with sixteen listeners using white noise, and fourteen listeners using high-pass noise whose cut-off frequency (Fc) was 2, 4, 8, 12, or 16 kHz, or low-pass noise whose Fc was 0.5, 1, 2, 4, or 8 kHz. Four listeners participated in an experiment on band-pass noise whose FcL-FcH was 2-12, 4-12, 2-8, 2-4, 4-8, or 8-12 kHz. It was very difficult to localize sound for high-pass noise whose Fc was high. Sound localization performance was 64% for 12-kHz high-pass noise while it was 27% for 16-kHz high-pass noise. In contrast, sound localization was possible even for 500-Hz low-pass noise. Sound localization performance was 81% for 1-kHz low-pass noise and 67% for that at 500 Hz. It was also difficult to localize sound for narrowband band-pass noise. Sound localization performance was 56% for 2-4-kHz band-pass noise and 67% for that at 8-12 kHz. These results suggest that the interaural time difference, mainly calculated from low-frequency components, the interaural level difference, mainly calculated from high-frequency components, and spectral cues, mainly calculated from middle-frequency (5-10 kHz) components, each play roles in sound localization for band-limited noise. We clarified that a signal bandwidth from 2 kHz to 12 kHz is necessary for good horizontal sound localization.
Kanazawa Institute of Technology, Nonoichi, Japan
ABSTRACT
Music critics often point out that some popular singers start to sing slightly after the accompaniment to give a groovy feeling. In our previous study, we revealed that a Japanese pop diva, Namie Amuro, started to sing approximately 70-90 ms after the accompaniment for the initial notes of phrases in her magnum opus, NEVER END. We then conducted a perceptual experiment on this delayed singing style: performances of an excerpt of NEVER END were synthesized, with the singing melody played by a brass instrument tone. The onset of the melody tone was set at various timings relative to the accompaniment for the initial notes of the phrases. The results of that study showed that performances delayed by 0-70 ms were perceived as natural and groovy, whereas the 90-ms delayed performance was neither natural nor groovy, in spite of the fact that Amuro realized a 90-ms delay for several notes. This discrepancy may be caused by whether words were sung or not.
In the present study, a perceptual experiment was conducted to clarify this. In the first session, the vocal part of the excerpt was synthesized using the character vocal synthesizer HATSUNE MIKU. The onset of the initial notes of the phrases was then manipulated to various timings relative to the accompaniment, using the high-quality speech manipulation system STRAIGHT, and mixed with the accompaniment played by a MIDI system. These performances were evaluated using Scheffé's paired comparison method. Five listeners listened to each pair of performances and were requested to compare the degrees of naturalness and groove between the former and latter performances and rate them on seven-step categories. In these performances, the words were sung. In the second session, HATSUNE MIKU sang /ne/ for all melody notes (scat style) and the timing was manipulated using STRAIGHT. In the third session a human amateur singer sang the words, and in the last session the singer sang /ne/ for all melody notes. The results showed that 90-ms delayed singing was perceived as natural and groovy when the words were sung. However, in the case of the scats, the 90-ms delayed performance was perceived as significantly inferior to the performances with 0-70 ms delay. The results for the scats are consistent with the case of the brass tone. These results show that we have a higher tolerance for delayed playing when words are sung than when a single timbre plays the melody.
(1) Arkamys, Paris, France (2) CNRS-LIMSI, Université Paris Sud, Orsay, France
ABSTRACT
Auditory virtual environments are becoming increasingly relevant for applications such as teleconferencing, hearing aids, video games, and general immersive listening. To enable high fidelity renderings of the sound scene in such environments, the audio content must be treated with the actual listener's acoustical filters, the so-called head-related transfer functions (HRTFs). The current challenge for general public applications, given the difficulty of measuring HRTFs for a given listener, is to be able to individually generate HRTFs or perform a selection from a database of pre-existing HRTFs, so as to provide the listener with an HRTF that enables a listening experience that is as realistic as possible, using for example only data taken from a photo of the listener's ear. A process is described in which a database of 46 measured HRTFs was analysed using various data reduction techniques such as principal component analysis and frequency scaling. A selection of the subjects' most significant morphological parameters was performed using data mining techniques such as support vector machines. This subset of morphological parameters for the subjects associated with the HRTF database was then used to perform multiple linear regressions against the reduced dataset of HRTFs in order to predict what might be the listener's preferred HRTFs. The prediction performance was then compared to the results of a perceptual evaluation of the HRTFs from the database using a listening test. The results show that the proposed process was able to predict preferred HRTFs for a listener significantly better than if the HRTFs were chosen at random. The results from the listening test were also used to explore a perceptually relevant frequency range of the HRTF.
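The data-reduction and regression stages of such a process can be sketched roughly as follows. This is only an illustration of the general idea, not the authors' pipeline: the array sizes, the random placeholder data and the choice of five principal components are assumptions, and the frequency-scaling and support-vector-machine steps mentioned above are omitted.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n_subjects, n_freq_bins, n_morph = 46, 128, 8      # 46 HRTFs as above; other sizes illustrative
    hrtf_mag = rng.standard_normal((n_subjects, n_freq_bins))  # placeholder HRTF magnitude spectra
    morph = rng.standard_normal((n_subjects, n_morph))         # placeholder morphological parameters

    pca = PCA(n_components=5).fit(hrtf_mag)            # data reduction of the HRTF set
    weights = pca.transform(hrtf_mag)                  # each subject described by a few weights

    reg = LinearRegression().fit(morph, weights)       # multiple linear regression on morphology
    new_listener = rng.standard_normal((1, n_morph))   # e.g. parameters estimated from an ear photo
    predicted_hrtf = pca.inverse_transform(reg.predict(new_listener))
    print(predicted_hrtf.shape)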
Faculty of Architecture, Design and Planning, University of Sydney, NSW, Australia
ABSTRACT
The ability of listeners to identify which of two simultaneously presented stimuli exhibited a sudden increase in periodic pitch modulation (vibrato) was examined in a divided attention task. While it is well established that stream segregation is influenced by both timbral differences and spatial separation between simultaneously presented stimuli, the interaction between these two salient factors had not been systematically investigated for such a task. The results of the study reported here showed that the facilitation of identification performance due to spatial separation of sources differing in timbre was greatest when those stimuli differed least in timbre. In particular, when two simultaneously presented sounds had very distinct vowel coloration, spatial separation of the two did not improve the ease of identifying which of the two sources exhibited a sudden increase in periodic pitch modulation. By quantifying vowel coloration differences for pairs of stimuli in terms of the Euclidean distance between their first two formant frequencies, it was shown that identification performance for stimuli exhibiting the most similar vowel coloration was most affected by spatial separation. In all cases tested, however, identification performance for spatially separated stimuli was superior to that for co-located stimuli. The results reveal the relative salience of these two prominent factors, spatial separation and timbral difference, in determining the effectiveness of concurrent auditory information displays.
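The vowel-coloration distance described above is simply a Euclidean distance in the plane spanned by the first two formant frequencies; a minimal sketch follows, with purely illustrative formant values.

    import math

    def formant_distance(f1_a, f2_a, f1_b, f2_b):
        """Euclidean distance between two vowels in the (F1, F2) plane, in Hz."""
        return math.hypot(f1_a - f1_b, f2_a - f2_b)

    # Illustrative values only: an /a/-like vowel versus an /i/-like vowel
    print(formant_distance(750, 1250, 300, 2300))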
Cochlear Ltd., Sydney, Australia; Bionic Ear Institute, Melbourne, Australia
ABSTRACT
Researchers are often forced to resort to esoteric acoustic stimuli to probe the finer nuances of pitch perception in normal hearing. Cochlear implants offer an alternative research tool, where place and temporal cues to pitch can be manipulated completely independently, allowing pitch perception models to be tested in ways that are not possible with normal hearing subjects. Temporal models of pitch perception postulate that pitch depends on auditory nerve inter-spike intervals. These models can be assessed with cochlear implant recipients by stimulating a single electrode with various pulse timing patterns. Pitch-ranking results using on-off modulated pulse trains were not consistent with the popular auto-correlation model, but were consistent with a model that analyses first-order inter-spike intervals. Cochlear implant recipients performed a melody perception test with melodies presented by varying the pulse rate on a single electrode. Scores were similar to those of normal hearing subjects listening to tones containing only unresolved harmonics. Such tones contain only temporal cues to pitch and evoke a weak pitch sensation. Normal hearing subjects obtained much higher scores for tones with resolved harmonics.
Cochlear implant place cues were investigated by choosing a fixed pulse rate, high enough to avoid temporal cues (1800 pps), and then varying the electrode position. Pitch-ranking results showed that when a group of neighbouring electrodes were stimulated, the place percept depended on the centroid of the stimulation pattern. Cochlear implant recipients, listening to melodies presented by varying the centroid of the stimulation pattern, were able to detect incorrect musical intervals. Earlier research has shown that cochlear implant place and temporal cues form independent perceptual dimensions. It is surprising that these two dissimilar perceptual attributes can each evoke a melody. No present models of pitch perception are able to explain this paradox. It is hypothesised that in normal hearing, a resolved harmonic produces a strong pitch sensation because of the specific phase relationship between the nerve firing times across a local region of the cochlea. Stimuli that do not produce the ideal spatio-temporal pattern can still evoke a weak pitch sensation that allows above-chance performance on pitch and melody tasks.
(1) Organization for Academic Information, Yamaguchi University, Japan (2) Graduate School of Science and Technology, Shinshu University, Japan (3) Faculty of Engineering, Shinshu University, Japan
ABSTRACT
To create a comfortable sound environment in which mental tasks are performed, it is important to understand the relationships between the characteristics of external acoustical noise and physiopsychological evaluations. When carrying out intellectual activities involving memory or arithmetic tasks, noise commonly causes an increased psychological impression of annoyance, leading to a decline in performance. This tendency is more apparent for meaningful noise, such as music and conversation, than for meaningless noise, such as road traffic noise. Hence, to design a comfortable sound environment, it is very important to quantitatively understand the relationship between the psychological impression of annoyance and not only measurable aspects of the noise, such as its sound pressure level, but also qualitative aspects, such as its degree of meaningfulness. This paper describes the physiopsychological effect of meaningful noise. Specifically, the authors first focus on the degree of meaningfulness of the noise and then discuss how the brain responds during mental tasks.
Transient event-related potentials (ERPs), which are elicited by any internal or external stimulus, can be measured using electroencephalography. The N100 component of the ERP is a negative-going evoked potential that peaks around 100 ms after the onset of a stimulus. It may represent the activation of neural assemblies involved in the analysis of incoming sensory information. The P300 component of the ERP is a positive peak occurring about 300 ms after presentation of the stimulus in response to detected signals. It is thought to reflect the resolution of uncertainty or the perceptual decision that an expected signal has occurred. The peak amplitude and latency of these components are relevant to selective attention and working memory operation. The present experiment was designed to determine the effects of the meaningfulness of the noise on selective attention to visual or auditory target stimuli in the oddball paradigm or repetitive probe-digit tasks by examining differences in the peak amplitudes and latencies of the ERP components. In addition, the authors considered whether the characteristics of the ERPs and the psychological impression of annoyance caused by the noise correlate with performance indices such as the percentage of correct answers or reaction time. The results suggest that whether the noise is meaningless or meaningful has a great influence on selective attention to stimuli in mental tasks, which is related to its psychological impression of annoyance and to task performance.
(1) Department of Physics, Nagpur University, Nagpur, India (2) Department of Electronics, Nagpur University, Nagpur, India (3) Laboratory of Acoustics, Faculty of Engineering, University of Porto, Porto, Portugal
ABSTRACT
The human whistle is a representation of human vocal singing. Singing (solo and congregational) is an essential component of sacred music for collective worship in a Catholic church. The acoustic characterization of sacred music is defined in this paper through a derived Acoustic Comfort Impression Index (ACII) and several Acoustic Worship Indices (AWI), namely the Subjective Sacred Factor (SSaF), the Subjective Intelligibility Factor (SInF) and the Subjective Silence Factor (SSiF). In this study, live sacred music rendered by the human whistle is compared with that rendered by the cello, clarinet, violins and the ensemble, in the Catholic church of the Divine Providence (Goa, India). Among the significant results, the ACII for the human whistle was found to be better than the ACII for the musical instruments (F = 2.38, p = 0.08); this difference was more significant at the nave of the church (music source) (F = 2.94, p = 0.04) and lower at the choir loft (music source) (p = 0.21). The SInF for the ensemble music was found to be better than the SInF for the human whistle (F = 3.07, p = 0.03). At the nave of the church, the SInF was found to be better than the SSaF and SSiF (F = 4.17, p = 0.02). The SSaF and SInF were equally better than the SSiF at the choir loft (p = 0.02). This study opens the possibility of the optimized use of the human whistle in rendering sacred music in a church.
Electroacoustic Graduate program, Feng Chia University, Taichung 40724, Taiwan (ROC)
ABSTRACT
Sound quality is an important issue in sound products today, covering a range of fields from music performance to mechanical noise, and is related to the human aural response. Many measurement and assessment items for sound quality have been defined, including frequency and loudness. Music also includes all subjective characteristics of sound. Sound allows people to appreciate their surroundings through the auditory organs, and listeners naturally anticipate the enjoyment of music. In brief, "timbre" is determined by "hearing sensation", and "satisfaction" is determined by both "sound imaging" and artistic content. When music is played, listeners pay attention to "hearing sensation" first and "satisfaction" second. However, "timbre of feeling" is difficult to express objectively, as listeners' subjective feelings cannot be accurately measured by acoustic measurement equipment. To combine objective analysis and psychoacoustics to reinterpret the ratio of "timbre of feeling" in sound quality, this paper presents an assessment model for the sound quality of audio performance based on psychoacoustic theory. The model incorporates auditory roughness and specific loudness, which are deemed the causes of the quality of audio performance. From the model, the optimum curve for auditory roughness is presented. Furthermore, the hearing balance of high audio fidelity and a hearing satisfaction rank are proposed. Experimental results show that the model can be applied not only to measure the sound quality of audio signals but also to make qualitative sound-quality comparisons of high-fidelity loudspeakers. The results also demonstrate that the proposed assessment model is capable of expressing subjective sound quality successfully.
AG Technische Akustik, MMK, TU München, Germany
ABSTRACT
Binaural room synthesis is a sound reproduction technology that is, for a normal hearing listener, based on convolution of the sound signals to be reproduced with the impulse responses of the sound pressure propagation paths from the sources to be simulated to the listener's eardrums. The convolution products are then presented by adequately equalized, appropriate headphones. The simulation of arbitrary acoustical reproduction environments is possible as long as they can be regarded as linear systems, which is the case for almost all practical scenarios. If the signal processing is implemented with real-time capability and adaptively, the impulse responses can be adjusted based on the listener's head position and orientation, gathered by a tracking system. This allows the listener to move freely while the simulated acoustical scenario, encoded by the sound pressure signals at the eardrums, remains correct. For research and fitting purposes in the field of hearing aids or cochlear implants, binaural synthesis could save effort and time by providing arbitrary acoustical environments in the laboratory. Since conventional binaural synthesis is based on the reproduction of an original scene's sound pressure signals in front of the listener's eardrums, it is not directly applicable to listeners using hearing aids or cochlear implants, especially when the sound pressure at the eardrums is not present or not involved in the hearing process. If cochlear implants are used in connection with hearing aids or the remaining normal hearing system, as for example with electro-acoustical stimulation, the situation becomes even more complicated. Within this contribution, the conventional theory of binaural room synthesis is adapted to the typical applications with hearing impaired listeners. Further, possible application scenarios and advantages compared to traditional methods for research and fitting in the context of hearing aids and cochlear implants are discussed.
AG Technische Akustik, MMK, TU München, Germany
ABSTRACT
The psychophysical phenomenon called the ventriloquism effect describes a possible influence of a visual stimulus on the judgment of a sound source's perceived position. The term minimum audible angle usually denotes the just distinguishable horizontal angular deviation between two sound sources. It depends on the direction of the sound sources relative to the listener's head, the sound stimuli considered, and the presentation sequence. Within this paper, a possible influence of the ventriloquism effect on the minimum audible angle for sequentially presented broadband noise is assessed with wave field synthesis and intensity panning as reproduction methods in a reflective environment, using an adaptive two-alternative forced-choice procedure. As a baseline, the minimum audible angle without any intended visual stimuli is determined in complete darkness for both reproduction methods. These baseline data are compared to absolute localization trial results for validation purposes. As visual stimuli, one concurrent and one contradictory light spot are added to the sound stimuli. A ventriloquism effect, if it occurs, can reduce or enlarge the minimum audible angle depending on whether the visual stimulus' position change is larger or smaller than the change in sound source position. It is shown that visual stimuli can influence minimum audible angles for both reproduction methods, although considerable differences occur between individuals and between the considered reproduction methods.
AG Technische Akustik, MMK, TU München, Germany
ABSTRACT
Wave field synthesis is an audio reproduction procedure that aims to produce a sound field that is completely correct at least within the so-called listening area. In theory, exact synthesis is possible under certain circumstances based on the Kirchhoff-Helmholtz integral equation. For practical realizations, closed loudspeaker boxes are usually employed. However, using closed loudspeaker boxes leads to a deviation of the synthesized wave field from the intended one and requires approximations in the derivation of the loudspeaker driving signals, since the prerequisites of the Kirchhoff-Helmholtz integral equation are no longer fulfilled. If wave field synthesis is employed in a typical listening environment, room influences further disturb the resulting sound field. Whether auditory perceptions generated by such a system equal those occurring in the reference scene is unclear. Within this contribution, loudness adjustments of narrow-band noises presented by wave field synthesis to the free-field case are shown. The results of these experiments under different room acoustical conditions are discussed and compared to a baseline, in which a single loudspeaker in the same reproduction room was adjusted to the free-field situation. Further, the acquired data are compared to the predictions of a typical loudness model according to DIN 45631/A1.
(1) Faculty of Systems Science and Technology, Akita Prefectural University, Japan (2) Graduate School of Systems Science and Technology, Akita Prefectural University, Japan
ABSTRACT
Simplification of head-related transfer functions (HRTFs) is important for effective implementation of their synthesis from a computational point of view. The authors take the following two points into account: 1) in the low-frequency region, the frequency characteristics of HRTFs are relatively independent of sound source direction; 2) in the high-frequency region, although the HRTFs depend strongly on source direction, the frequency resolution of the auditory system implies that the detailed spectral peaks and dips of the HRTFs are not evaluated. These points enable the HRTFs to be simplified to some extent. In this paper, the frequency characteristics of both the right- and left-ear HRTFs were flattened below a certain frequency, and only the HRTF on the contralateral side was flattened above a certain frequency. These simplifications were applied so as to retain the interaural level difference and interaural time difference. To evaluate the influence of the simplified HRTFs, a localization test in the horizontal plane was carried out. The experimental results showed that simplification beyond a certain extent particularly influenced front-back confusions. It is concluded that HRTFs can be simplified in the frequency ranges below 1 kHz and above 4 kHz.
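A rough sketch of this kind of magnitude flattening is given below. The 1-kHz and 4-kHz bounds follow the abstract's conclusion, but the flattening rule (replacing each region by its band mean so that the broadband interaural level difference is roughly preserved) and the data are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def simplify_hrtf_mag(freqs, mag_ipsi, mag_contra, f_low=1000.0, f_high=4000.0):
        """Flatten HRTF magnitude spectra: both ears below f_low, and only the
        contralateral ear above f_high (all details illustrative)."""
        ipsi, contra = mag_ipsi.copy(), mag_contra.copy()
        low, high = freqs < f_low, freqs > f_high
        ipsi[low] = ipsi[low].mean()        # flatten ipsilateral ear below f_low
        contra[low] = contra[low].mean()    # flatten contralateral ear below f_low
        contra[high] = contra[high].mean()  # flatten contralateral ear above f_high
        return ipsi, contra

    freqs = np.linspace(0, 22050, 257)
    rng = np.random.default_rng(2)
    ipsi, contra = simplify_hrtf_mag(freqs, rng.random(257), rng.random(257))
    print(ipsi[:3], contra[-3:])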
Department of Mechanics and Vibroacoustics, AGH University of Science and Technology, Kraków, Poland
ABSTRACT
This article presents an analysis of the results of a questionnaire designed to determine the places and situations in a big city in which wave-vibration signals could be useful for the orientation and mobility of blind and visually impaired persons. Examination of the questionnaire is the first stage in developing a concept for an orientation-training system based on wave-vibration signals and the sense of touch. The paper provides an analysis of hazardous places and particularly important situations for blind people, as well as an analysis of problems with mobility (going downstairs and upstairs, train stations) and orientation (which public buildings are the most important) in a big city. The questionnaire results are used to answer questions such as: in which situations in a big city is orientation based on wave-vibration signals most desirable, and in which situations is it most problematic? Based on the questionnaire results, hazardous places and situations will be chosen and a library of city situations containing vibration, acoustic and tactile signals will be built. This tool with "wave-vibration situations" will be used for orientation and mobility training. The paper also presents a review of the influence of environmental sounds on the orientation and mobility of blind and visually impaired persons.
(1) Michigan Technological University, Houghton, MI, USA (2) The Petroleum Institute, Abu Dhabi, United Arab Emirates
ABSTRACT
A study on the characterization of the sound quality of transient sounds via fundamental psychoacoustic measures is described in this paper. Specifically, the overall subjective perception of annoyance for transient sounds was studied. Through magnitude estimation and paired comparison jury evaluation experiments, the subjective annoyance magnitudes of 15 transient sounds were determined. For each sound, several objective psychoacoustic measures were calculated, and using simple linear regression models, the relationships between these objective measures and the subjective annoyance magnitudes were investigated. Examined psychoacoustic measures included loudness, sharpness, roughness, fluctuation strength, tonality, and a new loudness-based measure of impulsiveness. The new impulsiveness measure is based on the summation of the magnitudes of impulse-induced peaks in the loudness time history for a sound (calculated according to DIN 45631/A1). The models were analyzed using several statistical measures of model significance and fit. It was found that for the transient sounds studied, significant relationships existed between subjective annoyance and each of the following psychoacoustic measures: loudness, sharpness, roughness, and loudness-based impulsiveness. These four measures were then combined into a single model for predicting subjective annoyance using multiple linear regression analysis. It was found that this model was highly correlated to the subjective annoyance of transient sounds.
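The loudness-based impulsiveness idea and the regression step can be sketched roughly as follows. This is not the authors' algorithm: the moving-average baseline, the peak-picking rule, the threshold and the placeholder data are assumptions, and the DIN 45631/A1 loudness computation itself is not reproduced.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def impulsiveness(loudness, baseline_win=25, threshold=2.0):
        """Sum the magnitudes of impulse-induced peaks in a loudness time history
        (sones versus time). A sample counts as a peak when it is a local maximum
        exceeding a moving-average baseline by `threshold` sones (values illustrative)."""
        baseline = np.convolve(loudness, np.ones(baseline_win) / baseline_win, mode="same")
        excess = loudness - baseline
        is_peak = (excess[1:-1] > excess[:-2]) & (excess[1:-1] > excess[2:]) & (excess[1:-1] > threshold)
        return float(excess[1:-1][is_peak].sum())

    rng = np.random.default_rng(3)
    print(impulsiveness(rng.random(500) * 3))        # placeholder loudness time history

    # Multiple linear regression of jury annoyance on several psychoacoustic metrics
    X = rng.random((15, 4))   # placeholder loudness, sharpness, roughness, impulsiveness values
    y = rng.random(15)        # placeholder subjective annoyance magnitudes
    print(LinearRegression().fit(X, y).score(X, y))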
Chuo University, Tokyo, Japan
ABSTRACT
Recently, various vehicle safety sensors have become widespread. For warning sounds, such as those used to prevent pinching by a power sliding door, it is increasingly important that the warning system takes into account senior persons, whose hearing at high frequencies deteriorates. However, there are some issues with such warning systems. One issue is that, for a warning sound around 4,000 Hz, there is a difference in audibility of about 25 dB between a person in their twenties and a person in their sixties. Another issue is that recognizing the direction of the warning sound can be difficult because of environmental noise inside and outside the vehicle.
In this study, new methods for improving the recognition rate and the directional specificity of the warning system are developed using combined stimuli of sound (auditory sense) and vibration (haptic sense). First, the suitable frequency and stimulation time for the auditory stimulus are estimated experimentally. Next, the suitable body position, frequency and stimulation time for the haptic vibration stimulus are estimated experimentally. In addition, the recognition rate of the sound source position can be improved by using the directivity of the warning sound inside the vehicle. Moreover, it is verified that the recognition rate in a direction different from the sound source position can be improved by using the combined sound and vibration stimuli. As a result, it is confirmed that the recognition rate obtained with the combined stimuli of sound and vibration is higher than that obtained with a simple stimulus of sound or vibration alone at the driver position of the vehicle.
Graduate School of Engineering, Utsunomiya University, Japan
ABSTRACT
This study investigated the equivalence of perception between a visual event and its associated sound when the sound pressure level (SPL) was varied. We performed an auditory-visual stimulus presentation experiment using an audio-video clip of a man beating a drum on a road. The visual stimulus conveyed a feeling of depth through a perspective view of the road. We produced auditory-visual stimuli at presentation distances of 5, 10, 20, and 40 m under various conditions, in which we varied the SPL of the auditory stimulus (drum sound) from -12 to 12 dB relative to the measured SPL and the rate of the presentation distance from -40 to 40%. The visual stimulus was projected onto a screen with viewing angles of 30.8 degrees (W) x 16.1 degrees (H), and the auditory stimulus was reproduced via headphones. We presented the auditory-visual stimuli to the experimental subjects and asked them to subjectively evaluate whether the size of the visual event was larger or smaller than that imagined from the strength of its associated sound. We then estimated the subjective feeling of depth of the visual event, which is the visual distance matching the SPL of the sound, at each presentation distance. As a result, we found that the subjective feeling of depth tended to decrease as the SPL increased; that is, the subjects perceived the visual event as being nearer when the associated sound level became higher.
(1) Graduate School of Science and Technology, Ryukoku University, Shiga, Japan (2) Faculty of Science and Technology, Ryukoku University, Shiga, Japan
ABSTRACT
A mandolin tremolo is characterized by the average of the plucking rate, as well as the onset and amplitude deviations. Here, the fluctuation of a tremolo elicited with only the average plucking rate is called the "1st fluctuation", and that elicited with both onset and amplitude deviations is called the "2nd fluctuation". Although the fluctuation quantity of the tremolo with only the 1st fluctuation was calculated in our previous studies, that with both the 1st and 2nd fluctuations has not been investigated using the procedure, because the sensation of hearing fluctuation from it has not yet been investigated. Therefore, we propose indexes for calculating the fluctuation quantity of a mandolin tremolo with both the 1st and 2nd fluctuations. To develop the indexes, a procedure for calculating the fluctuation quantity of an imitated tremolo, employed here as stimuli, with both the 1st and 2nd fluctuations is investigated, and the sensation of hearing fluctuation on the tremolo is estimated using the calculated fluctuation quantity. The imitated tremolo is an AM complex tone whose envelope is identical to the tremolo so that it does not give us any musical impression but only the sensation of hearing fluctuation. In the investigation of the procedure, the index for the 1st fluctuation was from the procedure in our previous studies and the indexes for the 2nd fluctuation were calculated on the basis of the differences from a deviation trend curve, obtained by moving average, for a global tendency for each deviation. In addition, an experiment using the magnitude estimation method was conducted to subjectively evaluate the sensation of hearing fluctuation for these sounds. Next, multiple regression analysis was conducted to estimate the sensation of hearing fluctuation, in which evaluation results were used as objective variables and the indexes for 1st and 2nd fluctuations were used as explanatory variables. Finally, we obtained indexes for calculating the fluctuation quantity of a mandolin tremolo with both the 1st and 2nd fluctuations. We found that the procedure using both the 1st and 2nd fluctuations appropriately represents the sensation of hearing fluctuation (R2 > 0.75), which is better than when using only the 1st fluctuation (R2 = 0.55). Thus, we developed indexes for calculating the fluctuation quantity of a mandolin tremolo.
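One plausible way to express a 2nd-fluctuation index of this kind, with deviations measured against a moving-average trend curve, is sketched below; the RMS summary, the window length and the placeholder deviation data are assumptions for illustration only.

    import numpy as np

    def deviation_index(devs, win=5):
        """Index of the 2nd fluctuation for one deviation type (onset or amplitude):
        RMS of the deviations about a moving-average trend curve (window illustrative)."""
        trend = np.convolve(devs, np.ones(win) / win, mode="same")
        return float(np.sqrt(np.mean((devs - trend) ** 2)))

    rng = np.random.default_rng(4)
    onset_devs = rng.normal(0.0, 5.0, 40)   # placeholder onset deviations (ms)
    amp_devs = rng.normal(0.0, 1.5, 40)     # placeholder amplitude deviations (dB)
    print(deviation_index(onset_devs), deviation_index(amp_devs))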
(1) Osaka Institute of Technology, Japan (2) Utsunomiya University, Japan
ABSTRACT
Human beings obtain much information by combining various sensations. The simultaneity perception characteristics of these sensations play an essential role in understanding this information. In this study, we investigated the simultaneity perception characteristics of auditory stimuli with respect to attention. In the experiment, we presented two instantaneous sounds to evaluate the characteristics of simultaneity perception. The first sound was presented to the left or right ear, and the second sound was presented to the opposite ear after the first. The test sounds were 80 dB SPL, had a frequency of 1 kHz, and lasted 10 ms. The intervals between the first and second sounds were set at -80, -40, -20, -10, +10, +20, +40, and +80 ms, where positive values mean the first sound was presented to the left ear and negative values mean it was presented to the right ear. Because we wished to examine the effect of attention, we prepared two kinds of preceding-sound groups: one was employed for the attention test, in which the subject directs attention to one ear, and the other was used for the standard test, in which the subject does not direct attention to either ear. In the attention test, a preceding sound, identical to the test sound, was presented 20 times to the attended ear (left or right ear), 200 ms before the first test sound. In the standard test, the preceding sound was presented randomly to either the left or right ear so as not to direct attention to one ear. After presenting these preceding and test sounds, we asked the subject "which sound came first, left or right?" As a result, the selection probability of left-ear advance was almost the same as that of right-ear advance when the interval was less than 20 ms in the standard test. In the attention test, however, the probability of reporting the attended ear as leading was much higher than that of the opposite (unattended) ear at the same intervals. From these results, it was clarified that attention affects the simultaneity perception characteristics of auditory stimuli, and that the response to the sound presented to the attended ear becomes relatively faster because of attention.
(1) Communication Research Laboratory, Dept. of Speech-Language Pathology and Audiology, and Institute for Hearing, Speech & Language, Northeastern University, Boston, MA, USA (2) Auditory Modeling and Processing Laboratory, Dept. of Speech-Language Pathology and Audiology, and CDSP Center, Dept. of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA
ABSTRACT
Are conclusions about loudness drawn from tones presented via earphones in laboratories applicable to listening to a talker in a room? Textbooks state that a tone presented binaurally is louder than the same tone presented monaurally. This is called Binaural Loudness Summation, BLS. Recent data using speech stimuli from a visually present talker challenge conclusions drawn from classical binaural measurements obtained in laboratories. This demonstrates the importance of ecological validity in loudness research, which could change how the perception of loudness is understood. In 2009 we presented preliminary data showing that the amount of BLS is less for speech from a visually present talker than for recorded speech and tones. The present experiment builds on these earlier findings and tests the following hypothesis: speech presented under more ecologically valid conditions results in less BLS than speech presented without visual cues and/or presented via headphones. To answer this question, normal listeners were presented with two types of stimuli (recorded speech, with and without visual cues) monaurally and binaurally across a wide range of levels. The same stimuli were presented via earphones and loudspeakers. Loudness was measured using magnitude estimation. Results show that the amount of BLS was significantly less for speech with visual cues presented via a loudspeaker than for stimuli with any other combination of test parameters (i.e., speech without visual cues presented via both headphones and loudspeakers, and speech presented with visual cues via headphones). The present results are consistent with our earlier data. They show that the loudness of a visually present talker in daily environments is little affected by switching between binaural and monaural listening. The phenomenon has been dubbed "Binaural Loudness Constancy" because of its similarity to the loudness constancy that occurs with distance from the speaker. These data are highly likely to reveal an effect of higher-level processing on the loudness of everyday sounds in daily environments.
HEAD acoustics GmbH, Herzogenrath, Germany
ABSTRACT
Psychoacoustics has become increasingly important for community and environmental noise as well as for soundscape research. Unfortunately, recent noise control approaches still mostly interpret sound pressure levels and do not focus on the subjects' perception. However, there is a growing consensus about the necessity of applying further hearing-related parameters for a better understanding of environmental noise annoyance phenomena. In this context, the identification of the most important psychoacoustic quantities reflecting human responses to noise is the major task.
Especially with regard to the preservation and creation of quiet areas (Q-zones) according to EC Directive 2002/49, advanced evaluation tools and meaningful acoustic indicators are necessary to fulfil the ambitious goals regarding the creation of acoustically green city areas. Concepts for banning disturbing vehicles from quiet zones could guarantee that defined noise limits are not exceeded. In this context, vehicles could be classified with respect to their noise emissions using coloured badges comparable to fine-particle stickers. However, for an effective classification of the acoustic emission of vehicles, the human perception and evaluation of vehicle pass-by noise must be studied in detail and psychoacoustic parameters applied. Furthermore, a traffic synthesis tool for the auralisation of specific traffic scenarios is being developed to simulate, for example, the effect of new vehicle types on the resulting traffic noise. The synthesis tool could in principle be used for (a) the creation of audible maps, (b) testing the perceptual efficiency of potential noise mitigation measures and (c) examining the influences of different traffic compositions or traffic management measures on noise and noise annoyance respectively. The mentioned working tasks are carried out within the European research project "City Hush" with a special focus on the psychoacoustic evaluation of hybrid vehicles.
The current status of the development of psychoacoustic noise labelling of vehicles, as well as of the traffic synthesis tool, will be presented, and first results will be introduced and discussed. In general, the potential of these approaches for improving environmental noise quality will be discussed from an ecological point of view.
(1) Seikei University, Tokyo, Japan (2) Japan Automobile Research Institute, Japan
ABSTRACT
This paper describes the desirable condition of road traffic noise for human hearing in terms of sound quality. In general, reducing the envelope fluctuation of the sound and reducing its high-frequency content above 1 kHz are effective in achieving a desirable sound perception for machinery noise. According to the results of our laboratory experiment, this approach also holds for environmental sound such as road traffic noise from the viewpoint of sound quality. At present, a desirable sound environment is specified by the prescribed LAeq value, i.e., the equivalent A-weighted sound pressure level. However, in our experience, even under the same LAeq value, our perception of various road traffic noises differs significantly because their sound qualities differ from one another. This point is examined in detail in a laboratory experiment utilizing recorded real road traffic noise.
(1) Korea Advanced Institute of Science and Technology, Korea (2) Agency for Defence and Development, Korea
ABSTRACT
It is very important to apply an individual fitting algorithm for hearing aids to each hearing-impaired patient. This is particularly true for modern sophisticated digital hearing aids. Existing threshold-based fitting methods adopting a pure-tone stimulus yield the same target gains when individual hearing thresholds are identical. Consequently, the loudness perception of an individual is sometimes not precisely reflected in the fitting data, and a tedious re-fitting adjustment procedure must be performed after the initial fitting. Other loudness-based fitting methods, employing fractional octave-band tone stimuli, often result in excessive gains at low frequencies and require too many measurements for loudness level setting. In this study, as an attempt to alleviate the aforementioned problems, a new psychoacoustic fitting method is suggested. Subjects with normal hearing are tested, and their loudness perception of a certain level of band-limited white noise at the 14 modified critical bands is classified into five categories. In this way, a standard database of target fitting values is constructed by processing the test results statistically. A hearing-impaired patient undergoes the same test procedure, and the perceptual response data are used to estimate the individual hearing characteristics. The measured hearing-loss data are compared with the database for standard normal hearing, and the target gains for the five loudness categories are then obtained for compensation. Comparisons were made between the proposed and existing fitting methods for some patients. The results revealed that many patients felt that auditory performance was better with hearing aids fitted by the present method than with the existing algorithms, even after empirical re-adjustment.
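A rough sketch of the gain-derivation idea, comparing a patient's categorical loudness levels with a normal-hearing reference per band, is shown below. The band and category counts follow the abstract (14 modified critical bands, five loudness categories), but the simple subtraction rule and all numerical values are illustrative assumptions, not the proposed method itself.

    import numpy as np

    N_BANDS, N_CATEGORIES = 14, 5     # 14 modified critical bands, five loudness categories

    rng = np.random.default_rng(5)
    # Placeholder data: presentation level (dB SPL) at which each loudness category is
    # reached per band, for the normal-hearing reference database and for one patient.
    normal_levels = 40 + 10 * np.arange(N_CATEGORIES) + rng.normal(0, 2, (N_BANDS, N_CATEGORIES))
    patient_levels = normal_levels + rng.uniform(5, 30, (N_BANDS, 1))   # elevated levels

    # Illustrative target gain per band and category: the extra level the patient needs
    # in order to reach the same loudness category as the normal-hearing reference.
    target_gain = patient_levels - normal_levels
    print(np.round(target_gain, 1))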
(1) College of Art, Nihon University, Japan (2) Hitotsubashi University, Japan
ABSTRACT
Our purpose is to create realistic sound generated by human motion in a virtual reality environment. In the Japanese traditional dance called "Nihon Buyo", the sound of footsteps is very important because the dancer makes musical beats with footsteps during the performance. We tried to generate these footsteps from the dancer's motion for the purpose of producing a virtual reality performance of Nihon Buyo. In the real environment, sound is created by the vibration of materials; this is the fundamental physical principle. If a force is applied to a material, it starts to vibrate; this is the simplest phenomenon producing sound. We therefore had to create the material vibration in the virtual environment, and we tried to generate the footstep sounds from motion-capture data. To create realistic sound in the computer environment, we have to simulate the material vibration caused by the excitation. If we can simulate the material vibration, we can create the same environment in the computer as in the real world. To generate the virtual reality footstep sounds, we used physical modeling based on the calculation of elasticity, and we also used the Finite Element Method (FEM) to simulate the material vibration, which in this case was the vibration of a wooden floor.
First, we measured the floor vibration caused by the dancers' feet, so that the vibration estimated by the physical model could be compared with the floor vibration produced by real human movement. Contact-type vibration sensors were placed at three positions on the floor for the measurements. To produce the material vibration, excitation is of course needed; for footsteps, the excitation is simply the motion of the foot. In this modeling, we used the Z-axis motion-capture data of the performer's heel as the excitation data. For the physical modeling, we made a DSP program that translated the foot movement into excitation data, based on Modalys, which was developed by IRCAM. The footstep sounds were then generated from the dancer's motion data and the elastic parameters of wood. As a result of the evaluation, it was suggested that there is an indication of a correlation between the real vibration and the modeled sound.
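As a toy stand-in for the modal/FEM synthesis described above, the sketch below excites a single damped resonator, representing one mode of a wooden floor, with a short pulse in place of the heel-derived excitation. The modal frequency, decay and pulse shape are arbitrary illustrative values; Modalys and the FEM floor model are not reproduced.

    import numpy as np

    def floor_mode_response(excitation, fs=44100, f0=120.0, decay=8.0):
        """Convolve an excitation signal with the impulse response of one damped
        floor mode (frequency f0 in Hz, exponential decay rate per second).
        All parameter values are illustrative."""
        t = np.arange(int(0.5 * fs)) / fs
        mode = np.exp(-decay * t) * np.sin(2 * np.pi * f0 * t)
        return np.convolve(excitation, mode)

    fs = 44100
    excitation = np.zeros(fs)                                     # one second of silence ...
    excitation[1000:1100] = np.sin(np.pi * np.arange(100) / 100)  # ... with a short heel-strike pulse
    footstep = floor_mode_response(excitation, fs)
    print(footstep.shape)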
Biohousing Research Institute, Chonnam National University, Korea
ABSTRACT
Many studies have been performed on the subjective response to transportation noise, such as aircraft, railway and road traffic noise, to find the relationship between them. However, it is not easy to clarify the relationship because the subjective responses differ depending on the country, society and background. This study examined the effect of the exposure time of transportation noise on the subjective response. Road traffic noise is generally produced continuously, while aircraft noise and railway noise are produced intermittently. Therefore, the effect of noise exposure time and the combined effect were analyzed.
(1) Osaka University, Osaka, Japan (2) Panasonic Co. Ltd., Japan
ABSTRACT
Many electric appliances are used at home, and their sounds are often perceived as unwanted. It is desirable to reduce the unpleasant impression they make. Reducing the sound level is a basic technology for improving the sound quality of machinery noise. Moreover, if the sounds include pure-tone components and impulsive components, the elimination of these components may be an effective countermeasure.
Three experiments were conducted to examine the effect of the deteriorating factors on the impression of the sound quality of home electric appliances. The frequency components and the envelope patterns were modified after careful listening to the sounds from various home electric appliances. Forty-one sounds, including original sounds and their modified versions, were used in the experiments. In Experiment 1, the impression of sound quality was judged using the semantic differential. In Experiment 2, a paired comparison test was conducted to determine whether the modification of sound quality could be detected. In Experiment 3, all the sounds were connected in a random sequence with a slight interval between them, and the instantaneous impression was judged using the method of continuous judgment. The participants were requested to respond whenever they noticed a bad impression of the sound while watching TV; this situation simulates daily life. The results of the three experiments suggest that the sound level has an important effect on sound quality, and that the reduction of impulsive sounds and high-frequency components improved the sound quality when it was accompanied by a reduction of the sound level.
College of Wooster, Wayne County, Ohio, USA
ABSTRACT
Ecological psychoacoustics emphasizes the study of the perception of sounds that occur in natural environments outside the laboratory. Historically, much of the work in this area has focused on environmental sounds that exclude music and speech. However, music and speech have been some of the most frequently occurring natural sounds for quite a long time. This talk will review recent cross cultural empirical findings that link the perception and production of music to the perception and production of speech. In addition, it will show that the perception of both music and speech have links to the perception of many other naturally occurring sounds.
(1) Dept. of Communication Design Science, Faculty of Design, Kyushu University, Japan (2) Dept. of Information and Media Studies, Faculty of Global Communication, University of Nagasaki, Japan (3) Nippon Telegraph and Telephone East Corp., Japan
ABSTRACT
Humans represent sounds to others and receive information about sounds from others using onomatopoeia. Such representation is useful for obtaining and reporting the acoustic features and impressions of actual sounds without having to hear or emit them. But how accurately can we obtain such sound information from onomatopoeic representations? To examine the validity and applicability of using verbal representations to obtain sound information, rating experiments were carried out in which participants evaluated the auditory imagery associated with major and minor onomatopoeic representations created by listeners of various environmental sounds. A "major" onomatopoeic representation is one frequently given by many listeners of the sound, that is, a "typical" onomatopoeia, whereas a "minor" onomatopoeic representation is a unique representation that a listener of the sound would be relatively unlikely to use to describe it. In comparisons of impressions between real sounds and onomatopoeic stimuli, impressions of sharpness and brightness were similar for real sounds and both major and minor onomatopoeic stimuli, as were emotional impressions such as pleasantness for real sounds and major onomatopoeic stimuli. The auditory imagery of powerfulness associated with onomatopoeia, however, differed from the same impression of real sounds. Furthermore, participants answered questions about the sound sources themselves or the phenomena that create the sounds associated with the onomatopoeic stimuli. The percentage of participants who correctly recognized the sound source or the phenomenon creating the sound was calculated for each onomatopoeic stimulus. The percentage of correct answers averaged across all major onomatopoeic stimuli was 64.3%, whereas the same percentage for minor onomatopoeic stimuli was 24.3%. Furthermore, differences between impressions of the onomatopoeic representations and those of the corresponding sound stimuli were compared between two groups of onomatopoeic stimuli: one group comprising stimuli for which more than 50% of the participants correctly answered the sound source question, and another comprising those for which less than 50% of the participants did so. The difference in emotional impression for the group of onomatopoeic representations with higher sound-source recognition was significantly smaller than that for the other group. It can be said that recognition of the sound source from onomatopoeic stimuli affected the similarity of emotional impression between real sounds and onomatopoeia.
Institute of Physics, Oldenburg University, Oldenburg, Germany
ABSTRACT
Large fans often emit complex tones with a fundamental tone and a large number of harmonics. Two fans at different rotational speeds may generate complex tones with different fundamental frequencies and 'interaction tones'. The perception of such large tone complexes has many aspects and varies with the frequency ratio of the fundamentals, where even small ratio changes may lead to considerable changes, e.g. in the pleasantness of the sounds. The aim of the study is to identify the perceptual space for these large sets of tones and to investigate the impact of the fundamentals' frequency ratio on the different perceptual dimensions. A semantic differential is used to determine denotative and connotative properties of fifteen different mixed complex tones. A factor analysis provides factors that have specific relations to the three Namba factors 'pleasant', 'powerful' and 'metallic'. Owing to the high number of aspects in the perception of the multiple complex tones, the timbre factor 'metallic' is split up into different factors related to the spectral and temporal structures of the sounds. Details will be given in the presentation.
(1) Department of Urban and Civil Engineering, Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, P.R.China (2) School of Architecture, University of Sheffield, Sheffield, UK
ABSTRACT
Subjective evaluation of individual sounds is an important aspect of soundscape research. In this paper, a study of sound preference evaluations in urban open spaces is discussed, and artificial neural network (ANN) models for predicting the subjective evaluations of sound preference are developed. The impact of various factors on sound preference evaluations is statistically analysed based on indoor experiments and outdoor surveys. The importance of the sounds' physical and psychological factors and of the listeners' social and behavioural situations for sound preference evaluations is examined, and input variables for the ANN models are selected accordingly. A number of ANN models are explored in terms of the complicated relationships between the various factors and sound preference evaluations in different study cases. Four kinds of models have been built, namely general, individual, group, and lab models. The results show that the lab models make good predictions whereas the prediction performance of the other models is not satisfactory. No significant differences in predictions have been found among the general, individual and group models, indicating that the impact of different locations on sound preference evaluations is negligible. Furthermore, a mapping technique is proposed in order to directly assist urban planners and designers.
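By way of illustration only, and not the authors' code or data, the following minimal sketch shows the general workflow implied above: a small feed-forward ANN regressor trained on acoustic/psychoacoustic and listener variables to predict a sound-preference rating. The feature names and the synthetic data are placeholder assumptions.

```python
# Minimal sketch (illustrative data, not the study's dataset): an ANN regressor
# predicting a sound-preference score from mixed acoustic and listener variables.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns (placeholders): LAeq [dB], sharpness [acum], listener age, visit frequency
X = rng.uniform([40, 1.0, 18, 0], [80, 3.0, 70, 7], size=(300, 4))
y = 5 - 0.08 * (X[:, 0] - 40) + rng.normal(0, 0.5, 300)   # synthetic preference score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))   # prediction quality on unseen cases
```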
(1) Université de Cergy-Pontoise, LMRTE, Cergy-Pontoise, France (2) Ecole Nationale des Travaux Publics de l'Etat, CNRS, Département Génie Civil et Bâtiment, Vaulx-en-Velin, France (3) Ministère de l'Ecologie, de l'Energie, du Développement Durable et de la Mer, Mission Bruit et Agents Physique, La Défense, France (4) Université de Cergy-Pontoise, ETIS, Cergy-Pontoise, France
ABSTRACT
The purpose of this study is to develop a predictive model of urban sound quality from field survey data using multiple linear regressions and artificial neural networks (ANNs). In order to determine a soundscape pleasantness model, passers-by were asked to assess their environment mainly from an acoustic point of view but also from a global perspective (visual and air quality). Users were asked to evaluate the sound environment first as a whole and then while listening to each perceived sound source.
The investigation took place at the "Parc de la Tete d'Or", an urban park in the French city of Lyon, in two locations on either side of the main park access. One hundred and twenty subjects, divided equally between the two locations and the three periods of the day (morning, afternoon and evening), were interviewed. Each one had to evaluate twenty-six subjective variables on a rating scale from 0 to 10. To propose a relationship between soundscape pleasantness and the other twenty-five assessed variables, the collected data were analyzed with two models: multiple linear regressions and a predictive method based on artificial neural networks, which were compared. The first method is useful for understanding which variables explain the assessment of soundscape pleasantness, whereas the second can be considered a "black box". However, the ANNs seem to predict soundscape pleasantness better when a new set of data is tested.
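As an illustration of the trade-off described above (and not the authors' analysis), the sketch below contrasts the two modelling routes: a multiple linear regression whose coefficients remain interpretable, and an ANN scored on held-out data. The 25-variable feature matrix and the pleasantness score are synthetic assumptions.

```python
# Minimal sketch (synthetic data): interpretable linear regression vs "black-box" ANN.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(240, 25))               # 25 assessed variables (placeholders)
y = X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2] ** 2 + rng.normal(0, 0.3, 240)  # pleasantness

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
lin = LinearRegression().fit(X_tr, y_tr)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                 random_state=1)).fit(X_tr, y_tr)
print("linear coefficients (interpretable):", np.round(lin.coef_[:3], 2))
print("R^2 on new data  linear:", round(lin.score(X_te, y_te), 2),
      " ANN:", round(ann.score(X_te, y_te), 2))
```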
Department of Information Technology, Ghent University, Ghent, Belgium
ABSTRACT
A computational model of auditory attention to environmental sound, inspired by the structure of the human auditory system, is presented. The model simulates how listeners switch their attention over time between different auditory streams, based on bottom-up and top-down cues. The bottom-up cues are determined by the time-dependent saliency of each stream. The latter is calculated on the basis of an auditory saliency map, which encodes the intensity and the amount of spectral and temporal irregularities of the sound, and binary spectro-temporal masks for the different streams. The top-down cues are determined by the amount of volitional focusing on particular auditory streams. A competitive winner-takes-all mechanism, which balances the activation and inhibition of each stream, determines which stream is selected for entry into the working memory. Consequently, the model is able to delimit the time periods during which particular streams are paid attention to. Although the main ideas could be applied to all types of sound, the implementation of the model was targeted at environmental sound in particular. As an illustration, the model is used to reproduce results from a detailed field experiment on the perception of transportation noise. Finally, it is shown how this model could be a valuable tool, complementing auralization, in the design of outdoor soundscapes.
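To make the selection mechanism concrete, here is a deliberately simplified toy sketch, not the published model: per-frame bottom-up saliencies, a top-down gain for volitional focus, and a leaky winner-takes-all competition whose winner would enter working memory. All numbers and dynamics are illustrative assumptions.

```python
# Toy winner-takes-all attention selection (illustrative only, not the paper's model).
import numpy as np

n_streams, n_steps = 3, 200
rng = np.random.default_rng(0)
saliency = np.abs(rng.normal(1.0, 0.3, size=(n_steps, n_streams)))  # bottom-up, per frame
saliency[80:120, 2] += 2.0                  # e.g. a passing vehicle in stream 2
top_down = np.array([1.0, 1.5, 1.0])        # volitional focus on stream 1

activation = np.zeros(n_streams)
attended = []
for t in range(n_steps):
    drive = top_down * saliency[t]
    inhibition = drive.sum() - drive         # each stream inhibited by the others
    activation += 0.1 * (drive - 0.5 * inhibition - activation)  # leaky integration
    attended.append(int(np.argmax(activation)))  # winner "enters working memory"

print("attended stream per 20-frame block:",
      [max(set(attended[i:i+20]), key=attended[i:i+20].count)
       for i in range(0, n_steps, 20)])
```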
Acoustics Group, School of Architecture, University of Sheffield, Western Bank, Sheffield, UK
ABSTRACT
In order to thoroughly analyse aural environments and their impact on users, a multidisciplinary perspective covering the fields of acoustics, psychology, sociology, and architecture should be introduced. A widely accepted approach to soundscape analysis and acoustic comfort in enclosures has not yet been fully developed. Based on the classification of architectural spaces from the viewpoint of acoustic comfort, this paper presents soundscape evaluation in relation to objective acoustic indices within enclosures that have similar spatial configurations. The primary concern is to examine how sound behaves under certain architectural conditions. Existing literature using certain temporal and spatial parameters for sound fields and speech intelligibility is first reviewed. Then some parameters specifically relating to soundscape and acoustic comfort are discussed, considering various architectural formations. It is also argued that certain architectural elements and spatial organisations could have a range of influences on the formation of aural environments.
(1) HEAD acoustics GmbH, Herzogenrath, Germany (2) TU Berlin, Germany
ABSTRACT
A short-term scientific mission initiated through COST TD0804 was carried out for the evaluation, measurement and analysis of soundscapes. Several young researchers were introduced to binaural measurement technology, enhanced sound analysis, and evaluation techniques and procedures, and had the opportunity to perform field and laboratory tests. For the course, measurement technology and workstations were available, and a team of experienced researchers trained and supported the young researchers. In collaboration with qualified soundscape researchers, short case studies including measurement, analysis, evaluation, and classification of defined environmental areas were carried out. The teaching was based on educational modules covering the different research steps. This education concept will be presented and discussed based on the experiences of both the teachers and the young scientists.
Hanyang University, Seoul, Korea
ABSTRACT
In this study, urban soundscapes were evaluated through soundwalking. The soundwalking was conducted during an ISO meeting in Seoul, so international as well as domestic experts in soundscape studies participated. Subjects were asked to walk along a route with different characteristics of urban soundscape, and to concentrate on what they could hear as they walked and observed the urban environments. At each site, the subjects evaluated the overall impression of and preference for the context of the soundscape. During the soundwalking, audio-visual recordings were carried out using a binaural microphone and a video camera. From the results, the factors affecting the perception of urban soundscape were extracted, and the responses from domestic and international experts were compared.
Department of Geography and Resource Management, The Chinese University of Hong Kong, Hong Kong, P.R.China
ABSTRACT
In a review of the development of methodology in soundscape research in recent years, this paper identifies two apparently divergent approaches. On the one hand, there is an emergent trend to disaggregate the acoustic environment by identifying the sound components, differentiating the wanted from the unwanted ones, discerning intrusive sounds from the ambient, and delineating different soundscape units in space and soundscape segments over time. While such an approach can help characterize the acoustic environment and elucidate its key attributes, there is, on the other hand, a call to return to the basics of the human soundscape experience, which underscores the totality of the acoustic environment, the multi-modality of human senses and the recognition that "tuning", which transcends space and time, is not easily represented in simple metrics. The paper examines the apparent contradictions and complementarities of these approaches and explores the implications for soundscape evaluation and design.
School of Architecture, Tianjin University, P.R.China
ABSTRACT
The Haihe river, as the mother river of Tianjin, China, not only plays a vital role in social culture and psychology, but also provides a good place for people's leisure and entertainment. With the advance of landscape transformation in the Haihe coastwise area, creating a comfortable acoustic environment has become an important part of the overall transformation. In this study, the sound situation of the Haihe coastwise area, including sound types, relationships among different sounds, and sound expectations, was analyzed through physical measurements and social surveys. The aim was to explore methods of improving the acoustic environment of the Haihe coastwise area and its overall soundscape design. Furthermore, a suitable method of expressing soundscape design in open areas like the Haihe coastwise area was sought in order to build a communication platform for acoustic researchers, urban planners and the general public.
(1) Centro de Investigaciones Acústicas y Luminotécnicas (CIAL), Universidad Nacional de Córdoba, Argentina (2) Grupo de Investigación en Instrumentación y Acústica Aplicada (I2A2), Universidad Politécnica de Madrid, Spain
ABSTRACT
This paper shows the influence of the semantic content of urban sounds on the subjective evaluation of outdoor spaces. The study is based on an analysis conducted in three neighbouring and integrated urban spaces with different forms of social ownership in the city of Cordoba, Argentina. It shows that the type of sound source present at each site influences, through its semantic content, the user's identification with and permanence in the place. The noise present in a soundscape can have a high semantic content, and therefore the sound has a particular meaning for the perceiver.
Every social group influences the production of its own sounds and how they are perceived. This allows sound to be considered as one of the factors that define the sense of "place" or "non-place" of a certain urban space. Evidently the sounds, and their ability to evoke and characterize the environment, cannot be ignored in the construction and recovery of anthropological sites. This urban culture is unique and specific to every society. The public spaces, with their soundscape, are part of the construction of the urban identity of a city. It is shown that for identical overall sound levels in each of the spaces, the level of annoyance or discomfort, in relation to the subjective acoustic quality, is different. This is the result of the influence of the semantic content of the sounds present in each urban space. Coinciding with other similar research, the level of discomfort or annoyance decreases as the presence of natural sounds such as water, the wind in the trees or birdsong increases, even when the objective noise levels of the natural sounds are higher.
(1) Technische Universitaet Berlin, Germany (2) Hanyang University, Seoul, Korea (3) HEAD acoustics, Germany
ABSTRACT
A kick-off soundscape workshop for collaboration between Korean and German research teams took place in Seoul at the time the International Organization for Standardization committees ISO TC43, SC1 and SC2 met there, bringing together national and international experts working on the standard for soundscape evaluation procedures in Working Group ISO TC43/SC1/WG54. Both organizers of the workshop are members of WG 54, so the workshop was able to bring together national and international experts in an evaluation process that looks at urban spaces and their design with respect to people's minds, and to use recording facilities such as a binaural recording system and acoustic cameras. Besides theoretical and practical presentations and discussions, the highlight was a two-hour soundwalk in Seoul, evaluating different urban spaces explicitly using soundscape evaluation procedures. The workshop set a new milestone in soundscape research and in the collaboration of soundscape research across different parts of the world. The meaning of sound, the perception of noise, and the trend to design urban spaces not only with architectural know-how but also with users' expertise were central in the evaluation and brought new insights for both parties. Results will be presented.
(1) GRECAU-Bx, ENSAP Bordeaux, Talence, France (2) CRESSON, ENSA Grenoble, Grenoble, France (3) INRETS, Bron, France
ABSTRACT
ASTUCE, "Ambiances Sonores, Transports Urbains, Coeur de ville et Environnement" (Sound Ambiences, Urban Transport, City Centre and Environment), is a research project aiming to provide a relevant methodology to improve the environmental quality of city centres by considering the concept of soundscape. The project gathers researchers from two laboratories belonging to two higher national schools of Architecture (GRECAU-Bx and CRESSON) and a National Transport Research Institute. The researchers approach this topic by integrating the sensitive character of urban sound ambiences and the city dwellers' sound experience. The goal of the project is to develop a global approach that helps local authorities, decision makers, urban planners and town designers in the decision-making process. By collecting information about the urban sound environment, and identifying the sounds that satisfy the city dwellers' expectations and those that have to disappear or be modified, short- and long-term strategies will then be validated in complement to the noise action plans in line with the European Environmental Noise Directive. The paper deals with the first phase of the research project, explaining how the partners brought together their own methodologies, applied to two urban areas where several modes of transportation, including the tramway, are available, and giving some results of the different surveys. A later phase will concern the elaboration of user instructions intended for the different actors listed above in order to take the soundscape characteristics into account. The partners then propose to work out a methodology that not only intends to avoid and abate noise pollution, but also to contribute to improving the environmental quality of the city centre. To assist urban design actors in this complex task, the ASTUCE project aims to develop a guidebook comprising recommendations on organisational settings, on the involvement of different societal stakeholders, including the public, and on the action planning process.
(1) Graduate Institute of Rural Planning, NCHU, Taiwan (2) Department of Landscape Architecture, NCYU, Taiwan (3) Department of Urban Planning, FCU, Taiwan
ABSTRACT
The term "soundscape" translates as "the landscape of sound". Soundscape is a synthesized word made of "sound" and "landscape". It is a concept developed by Murray Schafer, a Canadian composer, in the early 1970s. The World Soundscape Project embarked on a series of investigations and researches, whose focus extended from the traditional approach of "individual phonetics" to the overall "sound environment". This paper sets its research focus on the Thao Tribe in Sun Moon Village, Yuchi Township, Nantou County in central Taiwan. The focal point is placed on the soundscape and social/spatial culture in order to examine the relationship between the culture and soundscape of the Thao Tribe against the backdrop of a changing society.
University of Bradford, UK
ABSTRACT
The literature points to the importance of quiet areas, green spaces and natural surroundings in relieving stress and improving feelings of well-being. Such tranquil landscapes and soundscapes are potentially well suited to the needs of citizens of metropolitan areas, because the stress of everyday city life can often involve intense periods of 'directed attention' over many hours, leading to stress and mental fatigue. At the University of Bradford in the UK, research has provided a unique engineering tool for predicting the perceived tranquillity of open spaces in towns, cities and countryside. The tool was initially used to carry out a pilot tranquillity audit of three open spaces and has now been extended to all four major parks in a metropolitan area. The dominant noise source in each case results from traffic on roads close to the boundaries. The results provide useful insights into the levels of tranquillity that can be achieved in such urban conditions, and the effects of moderating factors are discussed based on recent research results. Suggestions are made for improving the levels of tranquillity. The paper describes the results of the survey and discusses the trends found when the area of the parks and open spaces is considered.
Hanyang University, Seoul, Korea
ABSTRACT
In the present study, the railway soundscape in rural areas was assessed by field measurements. Landscape metrics of rural areas were analysed, and a total of 10 sites were then chosen covering different compositions of landscape metrics. Audio-visual recordings were carried out at the selected sites, and the acoustical characteristics of train noises were analyzed in terms of sound quality metrics and ACF (autocorrelation function) and IACF (interaural cross-correlation function) parameters. It was found that noise levels of high-speed trains ranged from around 70 to 90 dBA in terms of A-weighted equivalent noise levels. It can also be seen that IACC values of train noises were dependent on the layout of the recording sites, and that the perception of train noises was affected by background noise levels.
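For readers unfamiliar with the IACF/IACC parameters mentioned above, the following minimal sketch (with synthetic signals standing in for the binaural train-noise recordings) computes the normalized interaural cross-correlation within a +/-1 ms lag range and takes its maximum as the IACC.

```python
# Minimal sketch: IACC from a binaural pair (synthetic signals, circular shift for brevity).
import numpy as np

fs = 48000
rng = np.random.default_rng(0)
n = int(0.5 * fs)
noise = rng.normal(size=n)
left = noise
right = np.roll(noise, int(0.0003 * fs)) + 0.3 * rng.normal(size=n)  # delayed + decorrelated

max_lag = int(0.001 * fs)                            # +/- 1 ms
lags = np.arange(-max_lag, max_lag + 1)
norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
iacf = np.array([np.sum(left * np.roll(right, lag)) for lag in lags]) / norm
print("IACC =", round(float(iacf.max()), 3),
      "at lag", int(lags[iacf.argmax()]), "samples")
```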
(1) Department of Interior Design, Nan Jeon Institute of Technology, Tainan, Taiwan (2) School of Architecture, University of Sheffield, Western Bank, Sheffield, UK
ABSTRACT
To examine the effects of cultural factors in sound evaluation, a comparative study was carried out between the UK and Taiwan, with six case study sites, three in Sheffield and three in Taipei, representing typical urban texture of residential areas. The study included a series of questionnaire surveys as well as noise measurements and simulation of the case study sites using noise-mapping software. The results reveal significant differences between the two cultures in a number of aspects, including choosing and evaluating living environment, noise noticeability, annoyance and sleep disturbance, activities, and sound preference, although it has been demonstrated that both in the UK and Taiwan, acoustic quality is an important consideration of the overall urban environment.
CAPS - IST, TU, Lisbon
ABSTRACT
The Virtusound system is a computer based platform for room acoustics simulation and binaural auralization in real time, which is being developed at the Technical University of Lisbon. The system is based on an accelerated mirror image source method in combination with a time-dependent radiosity method for the computation of the binaural room impulse responses. Virtusound uses different accelerating techniques and algorithmic improvements for real time simulation and auralization through binaural technology. The system is currently implemented on standard personal computer hardware.
A-Volute, Villeneuve d'Ascq, France
ABSTRACT
The cancellation of transaural acoustic crosstalk is an essential and critical feature of all virtual auditory displays based on HRTFs (Head-Related Transfer Functions) when loudspeakers are used for listening. Generally, a satisfying crosstalk cancellation is achieved only within an extremely small sweet spot. Reducing the angle between the loudspeakers with respect to the listener increases the controlled area in the high frequency range, especially on the front-back axis, but makes the cancellation and equalization harder at lower frequencies, the amount of energy required to achieve the cancellation in this frequency range being prohibitively high. The directivity of the employed loudspeakers has a direct impact on the transaural acoustic crosstalk: the narrower the directivity, the lower the crosstalk level should be. An alternative to using a signal processing step to cancel the crosstalk would be to use highly directional loudspeakers to physically reduce the levels of the indirect path responses, i.e. from each loudspeaker to the contralateral ear, compared to those of the direct ones, i.e. from each loudspeaker to the ipsilateral ear. Devices known as parametric arrays exploit the nonlinearity of air to create audible sound from inaudible ultrasound, resulting in an extremely directive, beam-like wide-band acoustical source. This paper investigates the potential use of a pair of parametric arrays for HRTF-based transaural applications.
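One common formulation of the signal-processing route mentioned above is a per-frequency regularized inversion of the 2x2 loudspeaker-to-ear matrix. The sketch below is illustrative only: the ipsilateral/contralateral paths are toy filters, not measured HRTFs, and the regularization constant is an assumption.

```python
# Minimal sketch: frequency-domain crosstalk canceller via regularized 2x2 inversion.
import numpy as np

fs, n = 48000, 1024
freqs = np.fft.rfftfreq(n, 1 / fs)
# Toy ipsilateral/contralateral paths: contralateral is attenuated and slightly more delayed.
ipsi = np.exp(-2j * np.pi * freqs * 0.003)
contra = 0.5 * np.exp(-2j * np.pi * freqs * 0.0032)
beta = 0.005                                          # regularization constant (assumed)

C = np.zeros((freqs.size, 2, 2), dtype=complex)
for k in range(freqs.size):
    H = np.array([[ipsi[k], contra[k]],               # rows: ears, columns: loudspeakers
                  [contra[k], ipsi[k]]])
    # Regularized inverse: C = (H^H H + beta I)^-1 H^H
    C[k] = np.linalg.solve(H.conj().T @ H + beta * np.eye(2), H.conj().T)

# Check at one bin: H @ C should approximate the identity (crosstalk suppressed).
k = freqs.size // 4
H = np.array([[ipsi[k], contra[k]], [contra[k], ipsi[k]]])
print(np.round(np.abs(H @ C[k]), 2))
```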
(1) College of Art, Nihon University, Japan (2) Hitotsubashi University, Japan
ABSTRACT
Our purpose is to create realistic sound generated by human motion in a virtual reality environment. In the traditional Japanese dance called "Nihon Buyo", the sound of footsteps is very important because the dancer makes musical beats with footsteps during the performance. We tried to generate these footsteps from the dancer's motion for the purpose of producing a virtual reality performance of Nihon Buyo. In the real environment, material vibration creates sound; this is the fundamental physical principle. If we apply a force to a material, it starts to vibrate; this is the simplest sound-producing phenomenon. We therefore had to create the material vibration in the virtual environment, and we tried to generate the footstep sound from motion capture data. To create realistic sound in the computer environment, we have to simulate the material vibration caused by the excitation. If we can simulate the material vibration, we can create the same conditions in the computer environment as in the real world. To generate the virtual reality footstep sound, we used physical modelling, i.e. the calculation of the modelled elasticity, and in addition we used the Finite Element Method (FEM) to simulate the material vibration, in this case the vibration of a wooden floor.
First, we measured the floor vibration caused by the dancers' feet, in order to estimate with the physical model the vibration of the floor produced by real human movement. We placed contact-type vibration sensors at three locations on the floor and measured the vibration. To produce material vibration, an excitation is of course needed; for footsteps, the excitation is simply the motion of the foot. In this modelling, we used the Z-axis motion capture data of the performers' heels as the excitation data. For the physical modelling, we wrote a DSP program, based on Modalys developed by IRCAM, that translated the movement of the foot into excitation data, and the footstep sound was generated from the dancers' motion data and the elastic properties of wood. The estimation results suggested a correlation between the real vibration and the modelled sound.
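As a rough illustration of the idea (not the authors' Modalys/FEM pipeline), the sketch below excites a handful of damped "floor" modes at the instants where a synthetic Z-axis heel trajectory reaches the floor. All mode frequencies, decays and the trajectory are made-up assumptions.

```python
# Illustrative modal-synthesis stand-in for footstep sound driven by heel-height data.
import numpy as np

fs = 44100
t = np.arange(int(fs * 1.0)) / fs

# Placeholder heel height (m) sampled at 120 Hz: two footfalls within one second.
mocap_fs = 120
heel_z = np.concatenate([np.linspace(0.08, 0.0, 30), np.zeros(30),
                         np.linspace(0.0, 0.08, 20), np.linspace(0.08, 0.0, 20),
                         np.zeros(20)])
impact_frames = np.where((heel_z[:-1] > 0) & (heel_z[1:] == 0))[0]  # heel touches floor

# Damped modes standing in for wood-floor resonances: (freq Hz, decay 1/s, amplitude).
modes = [(90, 8, 1.0), (180, 12, 0.6), (310, 20, 0.4)]
out = np.zeros_like(t)
for frame in impact_frames:
    onset = int(frame / mocap_fs * fs)
    tau = t[: t.size - onset]
    strike = sum(a * np.exp(-d * tau) * np.sin(2 * np.pi * f * tau) for f, d, a in modes)
    out[onset:] += strike
out /= np.max(np.abs(out))
print("synthesized", out.size, "samples with", len(impact_frames), "footstep impacts")
```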
Faculty of Architecture, Design and Planning, University of Sydney, NSW, Australia
ABSTRACT
In the most demanding virtual auditory display applications, in which individualised Head-Related Transfer Functions (HRTFs) are used for the presentation of virtual sound sources via headphones, there is controversy regarding how important it may be for individualised Headphone Transfer Function (HpTF) measurements to be used in equalizing the headphone response for each listener. In order to test what impact the use of such individualised HpTF-based correction might have on directional judgments, filtered noise bursts were presented with and without such headphone correction during a test of front/back hemifield discrimination for virtual sound sources positioned on six sagittal planes offset from the median plane by 15°, 30°, and 45° to either side. While perfect discrimination performance was observed in repeated two-interval forced-choice discrimination trials in which a pair of short noise bursts was presented using individualised HRTFs, within-trial variation in the spectrum of the source submitted to HRTF-based processing made the task quite difficult, reducing performance to chance levels for 5 of the 16 listeners tested. For the remaining listeners, who showed above-chance performance under all conditions tested, performance levels were well below the perfect performance that had been observed when the spectrum of the HRTF-processed source was held constant. Through inter-stimulus variation in source spectra, which served to remove the so-called "known-source-spectrum ceiling effect" associated with simple laboratory tests of virtual auditory display technology, it was possible to show that front/back discrimination performance was clearly affected when sources were processed using headphone correction filters based upon each individual's measured HpTF.
National Institute of Information and Communications Technology, Kyoto, Japan
ABSTRACT
If the same sound pressure as when a listener listens to the sound without headphones could be reproduced at the eardrum, he or she would perceive three-dimensional sound even when the sound is presented through headphones. Headphone calibration is therefore required to compensate for individual variations in the transfer function of a listener's ear canal with and without headphones. From a practical point of view, a headphone calibration function applicable to many listeners would be attractive. We measured headphone calibration functions for 245 listeners, and from these data derived the mean calibration function. Its effects on the subjective impression of the spatial features of the reproduced sound were tested through listening tests. Participants compared a set of virtual three-dimensional sounds reproduced with different calibration functions: the mean calibration function and those measured using various ears, such as an artificial ear (B&K 4153), a head and torso simulator (B&K 4128) and the listener's own ear. Sound stimuli were presented in pairs, and evaluated by participants in terms of diffuseness, displacement of the virtual sound image from the real loudspeakers used in the recording, and externalization. This paper will first briefly describe the theoretical background on the necessity of headphone calibration for reproduction of sound pressure at the eardrum, using electrical equivalent circuit models; it will then show the experimental setup and present the results. Statistical analyses of the results revealed that the mean headphone calibration function works well in terms of diffuseness. Variations in the headphone calibration functions among listeners will also be discussed. Characteristics of the headphone calibration functions common to many listeners can be seen most markedly in the frequency region from approximately 5 to 12 kHz; above this region, substantial individual variations were observed.
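A minimal sketch of the averaging step described above is given below; it is not the authors' procedure, and the individual headphone transfer functions are synthetic (a shared notch plus listener-dependent variation, largest above 5 kHz as reported). The inverse-filter magnitude limit is an assumption.

```python
# Minimal sketch: mean headphone calibration function from many synthetic HpTFs.
import numpy as np

fs, n = 48000, 2048
freqs = np.fft.rfftfreq(n, 1 / fs)
rng = np.random.default_rng(0)

n_listeners = 245
depth = rng.uniform(5, 15, n_listeners)               # individual notch depths (dB)
hptf_db = np.array([-d * np.exp(-((freqs - 9000) / 1500) ** 2)
                    + rng.normal(0, 1 + 2 * (freqs > 5000), freqs.size)
                    for d in depth])                   # synthetic HpTF magnitudes in dB

mean_db = hptf_db.mean(axis=0)                         # mean calibration target
inverse_db = -np.clip(mean_db, -20, 20)                # inverse filter, limited to +/-20 dB
print("largest average correction: %.1f dB near %.0f Hz"
      % (inverse_db.max(), freqs[np.argmax(inverse_db)]))
```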
Institute of Technical Acoustics, RWTH Aachen University, Germany
ABSTRACT
The quality of present-day room acoustic simulations stands or falls with the quality of the underlying CAD room models. A "high-quality" room model does not imply that it has to be highly detailed with a lot of small objects and ornamentation. High accuracy of a model and its auralization is only achieved when basic acoustic principles are respected. For geometrically based simulations this means that we have to handle wavelengths from 1.7 cm to 17 m with regard to their reflection/scattering pattern at room or object surfaces. As this spans the dimensions of the majority of known objects, walls etc., only an active room model that changes its level of detail according to the frequency of the incident sound wave can ensure correct results, e.g. for low-frequency specular reflections. For real-time auralizations, which are used in sophisticated virtual reality systems, another emerging question is how to deal with very complex room geometries in limited computation time. Fortunately, using an active frequency-dependent geometry helps a lot, because simulations are much faster when simple geometries with a low polygon count are used. These simpler levels are used precisely for the lower frequencies, which typically travel much longer paths than higher ones and therefore produce the majority of the computation load, which is now significantly reduced.
Furthermore, the introduction of a temporal discretization, which reduces room model details step by step over the duration of the room impulse response, can save valuable computation resources as well. This technique makes use of perceptual characteristics of the human ear, for which fine structures in the late part of the impulse response cannot be distinguished. Moreover, most of the energy in this late part originates from diffuse reflections, for which the exact geometry does not matter. Thus, the active geometry model switches to simpler structures for late reflections. In this contribution, the newly developed active geometry model, which uses a frequency- and time-dependent level of detail, will be presented together with results from comparative listening tests. These indicate the necessary complexity for the highest detail step as well as the maximally allowed simplification based on human perception.
Institute of Technical Acoustics, RWTH Aachen University, Germany
ABSTRACT
Over the last decades, Virtual Reality (VR) technology has emerged as a powerful tool for a wide variety of applications such as rapid prototyping, evaluation, therapy, or training tasks. For high-quality auralizations (in analogy to visualizations) of virtual environments, methods of Geometrical Acoustics (GA) are mostly applied to simulate the propagation of sound inside enclosures. By adopting acceleration algorithms such as BSP trees and octrees, current implementations can manage the computational load of moving sound sources around a moving receiver in real time, even for complex scenarios. However, insertion, modification and extraction of geometrical objects are basic operations in many real-world experiences, and hierarchical spatial data structures do not support them efficiently. For this purpose the concept of Spatial Hashing was introduced, which is usually applied to collision detection tests of deformable objects in Computer Graphics. This contribution describes the design, implementation and integration of a dynamic object controller in the real-time room acoustics simulation software RAVEN. By adapting the concept of Spatial Hashing to the simulation algorithms, RAVEN is able to handle geometry modifications in real time. The performance of the newly implemented data handling and simulation routines is briefly discussed and compared to that of brute force and BSP-based algorithms.
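For readers unfamiliar with spatial hashing, the generic sketch below shows why it suits dynamic geometry: object bounding boxes are keyed into a hash map by grid-cell coordinates, so inserting, moving or removing an object touches only a few cells. This is a textbook-style illustration, not the RAVEN implementation.

```python
# Generic spatial-hash grid for axis-aligned bounding boxes (illustrative only).
from collections import defaultdict

CELL = 1.0  # grid cell size in metres (assumed)

def cells_of(aabb):
    """Yield all integer cell coordinates overlapped by an axis-aligned box."""
    (x0, y0, z0), (x1, y1, z1) = aabb
    for i in range(int(x0 // CELL), int(x1 // CELL) + 1):
        for j in range(int(y0 // CELL), int(y1 // CELL) + 1):
            for k in range(int(z0 // CELL), int(z1 // CELL) + 1):
                yield (i, j, k)

grid = defaultdict(set)

def insert(obj_id, aabb):
    for c in cells_of(aabb):
        grid[c].add(obj_id)

def remove(obj_id, aabb):
    for c in cells_of(aabb):
        grid[c].discard(obj_id)

def candidates(aabb):
    """Objects that might intersect the query box (exact tests come afterwards)."""
    found = set()
    for c in cells_of(aabb):
        found |= grid[c]
    return found

insert("chair", ((0.0, 0.0, 0.0), (0.6, 0.6, 1.0)))
insert("table", ((2.0, 0.0, 0.0), (3.2, 1.5, 0.8)))
print(candidates(((0.5, 0.2, 0.2), (0.9, 0.5, 0.5))))   # -> {'chair'}
```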
Faculty of Architecture, Design and Planning, University of Sydney, NSW, Australia
ABSTRACT
The ability of listeners to identify which of two simultaneously presented stimuli exhibited a sudden increase in periodic pitch modulation (vibrato) was examined in a divided attention task. While it is well established that stream segregation is influenced by both timbral differences and spatial separation between simultaneously presented stimuli, the interaction between these two salient factors had not been systematically investigated for such a task. The results of the study reported here showed that the facilitation of identification performance due to spatial separation of sources differing in timbre was greatest when those stimuli differed least in timbre. In particular, when two simultaneously presented sounds had very distinct vowel coloration, spatial separation of the two did not improve the ease of identifying which of the two sources exhibited a sudden increase in periodic pitch modulation. By quantifying vowel coloration differences for pairs of stimuli in terms of the Euclidean distance between their first two formant frequencies, it was shown that identification performance for stimuli exhibiting the most similar vowel coloration was most affected by spatial separation. In all cases tested, however, identification performance for spatially separated stimuli was superior to that for co-located stimuli. The results reveal the relative salience of these two prominent factors, spatial separation and timbral difference, in determining the effectiveness of concurrent auditory information displays.
RWTH Aachen University, Germany
ABSTRACT
The first round robin on room acoustics computer simulation was presented at ICA Trondheim in 1995. The results showed that scattering effects are crucial, and accordingly they were implemented in later software versions. Since then there has been significant progress in prediction and simulation tools in architectural acoustics. Ray-, beam- and cone-tracing hybrid models of geometrical acoustics can be found in several programmes, and these deliver user-friendly results in colour mapping and in auralizations. Finite element models and other numerical methods are also used to solve indoor problems in reasonable time. The question, however, is whether we can trust these results without doubt.
The reliability of results from such computer tools depends at least partly on the quality of the numerical solver for geometrical or wave-based models. Also relevant is the quality of input data such as geometry or boundary conditions and, of course, the skills of the operator. This presentation summarizes the state of the art in computer simulations, and it focuses on sources of uncertainties in computer models, on actual status in solving indoor acoustic problems and on approaches for quantitative error propagation of uncertainties of input data.
Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
ABSTRACT
The head-related transfer function (HRTF) describes the transfer function from a sound source to the listener's ears and plays a central role in binaural spatial hearing and virtual hearing studies. Measuring HRTFs requires rigorous experimental conditions and specially designed equipment, and the procedure becomes very time-consuming and tiring for the participants. In this paper a fast HRTF measurement method is presented. By multi-point simultaneous measurement using a loudspeaker array, rigorous acoustical conditions and special equipment are not required, and the needed HRTFs of a subject are rapidly measured together with the subject's head and position information. The quality of the measured HRTFs is also evaluated. Experiments in an ordinary room demonstrated the method's effectiveness.
(1) Wakayama University, Wakayama, Japan (2) Ritsumeikan University, Kusatsu, Kyoto, Japan
ABSTRACT
A new spectral estimation method is proposed which improves the processed sound quality of STRAIGHT, a speech analysis, modification and re-synthesis framework widely used for high-quality speech and singing manipulations. Application of the proposed method to TANDEM-STRAIGHT, a completely reformulated version of STRAIGHT, yielded the best spectral envelope approximation among conventional methods such as LPC, cepstrum and legacy-STRAIGHT. TANDEM-STRAIGHT consists of two parts: a temporally stable power spectrum estimation method for periodic signals (TANDEM) and a spectral envelope calculation method based on consistent sampling theory. The proposed method uses F0-adaptive smoothing and compensation of the logarithmic power spectrum to improve the approximation accuracy of spectral peaks, which affects the quality of the re-synthesized sound. A series of simulations was conducted to optimize the internal parameters of the proposed method. The optimized system was evaluated and compared with conventional methods using stylized spectra and simulated speech spectra. The evaluation was based on a spectral distance measure proposed by Itakura and Saito, modified to the perceptually relevant ERB-number frequency axis. The following set of spectra was used: power spectra calculated from vocal tract area functions measured using MRI data, with LF-model excitation spectra, served as the ground truth, and the spectral distances between this target and the estimated spectra were evaluated. A periodic pulse train was used as the excitation signal in this case. The evaluation results indicated that the proposed method yields the smallest spectral distance among conventional methods such as LPC, cepstrum and legacy-STRAIGHT.
(1) Institute of Biomedical Engineering, National Yang-Ming University, Taiwan (2) Institute of Speech and Hearing Disorders and Sciences, National Taipei College of Nursing, Taiwan (3) Department of Otorhinolaryngology, Head & Neck Surgery, The Chinese University of Hong Kong, Hong Kong, P.R.China
ABSTRACT
In Mandarin Chinese, a tonal language, the pitch pattern of syllables conveys lexical meaning. There are four tone patterns (Tone 1 through Tone 4), defined by the pitch pattern, or the fundamental frequency pattern. Each of them has very distinctive acoustic characteristics, including fundamental frequency, amplitude, duration, tempo, pausing, distribution of energy in the frequency spectrum, and formant location. Emotional tone of voice is defined as a vocal expression of emotion conveying affective information; each emotional tone also has very distinctive acoustic characteristics. The acoustic characteristics of Mandarin tones change the listener's perception of the meaning of an utterance. Emotional tone of voice behaves similarly in perception: when its acoustic characteristics change, the listener's perception may vary. Therefore, the purpose of this study is to investigate the variation of the acoustic characteristics of the four tones across different emotional tones of voice in Mandarin.
Study Design: 11 male subjects and 11 female subjects (mean age: 26) participated in this study. All subjects were native Mandarin speakers, had no history of any speech or hearing disorder, and passed articulation and voice assessments. Speech samples designed by the author were used in this study. There were a total of forty speech samples: two phonetic contexts (/ti/ and /tu/), four lexical tones, and five different emotional tones of voice (angry, sad, fearful, neutral, and happy). Acoustic analysis of each target syllable included the mean F0, the mean amplitude, and the duration of the target syllables. The mean F0 and the mean amplitude for the four tones of the target syllables are significantly different across the five emotional tones of voice, but the durations for the four tones of the target syllables in the neutral tone of voice are similar to those in the other four emotional tones of voice. This is a preliminary study investigating the variation of the acoustic characteristics of the four tones across different emotional tones of voice in Mandarin, and further investigation is needed.
University of Sciences and Technology Houari Boumediene (USTHB), FEI LCPTS, Algeria
ABSTRACT
We have developed an estimation method for non-stationary formant frequencies based on the instantaneous frequency obtained by the continuous Morlet wavelet transform. We propose an algorithm which allows the automatic determination of the temporal variations of formants up to the fourth or fifth order (which is often difficult to achieve). The advantage of this algorithm is that it applies directly to an acoustic signal without preliminary pre-processing. Tests were carried out on phonemes, words and sentences containing all types of sounds. The results obtained on synthetic signals (for validation of the method) and real signals were represented on a spectrogram and compared with those obtained by traditional methods.
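As a rough illustration of the underlying idea (not the authors' algorithm), the sketch below analyses a synthetic formant-like chirp with a complex Morlet filterbank and reads the dominant ridge frequency over time. The wavelet parametrisation and the frequency grid are assumptions.

```python
# Illustrative Morlet-filterbank ridge tracking of a time-varying spectral peak.
import numpy as np

fs = 16000
t = np.arange(int(0.2 * fs)) / fs
f_true = 500 + 1500 * t / t[-1]                        # "formant" gliding 500 -> 2000 Hz
x = np.sin(2 * np.pi * np.cumsum(f_true) / fs)

def morlet_response(x, fc, fs, cycles=6):
    """Magnitude of the convolution with a complex Morlet wavelet centred at fc."""
    dur = cycles / fc
    tw = np.arange(-dur, dur, 1 / fs)
    wavelet = np.exp(2j * np.pi * fc * tw) * np.exp(-(tw ** 2) / (2 * (dur / 3) ** 2))
    return np.abs(np.convolve(x, wavelet, mode="same"))

centres = np.arange(200, 3000, 50.0)                   # analysed frequencies in Hz
scalogram = np.array([morlet_response(x, fc, fs) for fc in centres])
ridge = centres[np.argmax(scalogram, axis=0)]          # frequency of the dominant ridge
print("estimated frequency at 10 ms / 190 ms:",
      ridge[int(0.01 * fs)], "/", ridge[int(0.19 * fs)], "Hz")
```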
USTHB FEI Algiers, Algeria
ABSTRACT
The robotic character of the synthetic voice produced by speaking machines is one of the major concerns of researchers in the field of speech production modelling. The models most affected are especially those of acoustic origin. The Klatt synthesizer (an acoustic and computational model of speech production) is much appreciated for its simplicity of implementation; however, the synthetic speech signal it produces sounds metallic. This metallic character is partly related to the melody of the synthetic signal, which is produced by a very simplistic source model. In this study we propose a methodology for establishing an acoustic-electric analogue of the two-mass model (a mechanical model of the vocal folds) in order to produce the glottal flow wave in place of the Klatt source model. The results obtained indeed show a very appreciable reduction in the robotic character of the synthetic signal.
(1) Gipsa-lab, UMR CNRS 5216, Grenoble Universities, France (2) Cybermedia Center, Osaka University, Japan (3) The Center for Advanced Medical Engineering and Informatics, Osaka University, Japan (4) National Institute of Information and Communications Technology, Japan
ABSTRACT
The spatial development of steady flow through a constricted rectangular nozzle is characterised. The constriction consists of an obstacle in the shape of a trapezoidal wedge, which is inserted perpendicular to the main flow direction. The channel exit is situated downstream of the obstacle at 1.5 times the minimum aperture. The constriction degree is fixed at 70% and the aspect ratio is 4 in the unconstricted and 15 in the constricted portion of the channel, so that the flow can be considered two-dimensional. The imposed bulk Reynolds number is 4000. The flow through the channel is computed by Large Eddy Simulation. In addition, the flow through the nozzle downstream of the obstacle, up to 7 times the minimum aperture, is experimentally assessed by smoke visualisation. As a consequence of the geometrical asymmetry in the nozzle design, important asymmetries in the mean as well as in the shear flow development are pointed out. Despite severe simplifications, the studied nozzle geometry and flow conditions are relevant to human fricative production.
Moscow State Lomonosov University, Russia
ABSTRACT
An MRI-based contrastive investigation of the articulations of Russian nasal and oral, hard and soft labial consonants was carried out using an original technique developed for real-time MRI visualization of speech articulation dynamics. A crucial distinctive role of the velum configurations has been demonstrated for this type of Russian consonantal production. The experimental data in our data set show a notable degree of articulatory contour matching for each type of hard and soft consonant under investigation, irrespective of most vocalic contexts. We suppose that the observed motor stereotypy results primarily from a phoneme's inherent properties and far less from a specific phonetic context. The main differences between palatalized and non-palatalized articulatory patterns of the experimental consonants were also described, and some coarticulation constraints resulting from the compatibility of various elements in CV clusters are propounded. We suppose these differences to be the main reason for the strong phonotactic constraints observed in modern Russian pronunciation practice. The Russian standard pronunciation dictionaries contain only very few words with the hard labial nasal consonant [m] or the bilabial stop consonant [b] preceding the vowel [e] within a single syllable: [mer] (city mayor), [mejnstr'ím] (mainstream), [ber] (rem), [bekvakál] (back-vocal). It is worth mentioning that most of these words are recently adopted into Russian, or form a minimal distinctive pair (as for "city mayor"). In the absolute majority of other frequently occurring foreign adoptions in Russian, a palatalized consonant phoneme within a CV cluster [m] + [e] or [b] + [e] is recommended as the standard pronunciation: [m'éd'ium] (medium), [b'eZ] (beige), [b'ekón] (bacon), etc.
Ritsumeikan University, Kusatsu, Kyoto, Japan
ABSTRACT
Singing synthesis applications have recently been developed to create natural singing voices. Applications such as VOCALOID can synthesize singing voices by inputting lyrics and scores, and musical expressions such as vibrato and portamento are added to the singing voices by controlling pitches. This control is crucial to synthesize natural singing voices, but users are required to edit the pitch contour precisely, which means that the sound quality depends on the skill of the user. If the singing voices were controlled by using professional singers' musical expressions, users could synthesize natural singing voices more easily.
We created a singing database composed of several singers' vibrato and portamento, and then extracted their musical expressions and analyzed the vibrato to synthesize natural singing voices. The database consists of four professional singers (two female and two male) who sang the five Japanese vowels. The vibrato pitches were set to thirteen pitches spanning an octave that each singer could sing. This database contains not only the singers' own singing voices but also imitations of other singers. We used the database to analyze the differences in vibrato between professional singers and to determine how vibrato is controlled during impersonation. In a previous study, a vibrato model in the form of a fundamental frequency (F0) contour was proposed to synthesize natural singing voices. This model can control the vibrato rate (speed of pitch fluctuation) and vibrato extent (width of pitch fluctuation). Vibrato sections are extracted, and vibrato features are calculated from the F0 contour of the singing voices and defined as averages over the vibrato section. It was reported that these vibrato features varied according to the type of song. Song type was the only focus of that study, whereas in our study we focus on differences in vibrato between singers. We propose using vibrato duration (the ratio between vibrato length and voice length) and the time fluctuations of the vibrato rate and vibrato extent to analyze professional singers' vibrato. The time fluctuation of the vibrato rate is calculated from the period of the F0 contour, and the time fluctuation of the vibrato extent is calculated from the instantaneous amplitude of the F0 contour. Results showed the different vibrato features of the professional singers and also indicated that when a singer imitates another singer, the vibrato features are consciously controlled.
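To make the two basic vibrato features concrete, the sketch below estimates the vibrato rate and extent from an F0 contour; this is a minimal stand-in under stated assumptions (synthetic contour, fixed frame rate), not the authors' extraction procedure.

```python
# Minimal sketch: vibrato rate (Hz) and extent (cents) from a synthetic F0 contour.
import numpy as np

frame_rate = 200.0                                     # F0 frames per second (assumed)
t = np.arange(int(2.0 * frame_rate)) / frame_rate
f0 = 440.0 * 2 ** (0.5 * np.sin(2 * np.pi * 5.5 * t) / 12)   # 5.5 Hz vibrato, +/-50 cents

cents = 1200 * np.log2(f0 / np.mean(f0))               # deviation from mean pitch, in cents
zero_crossings = np.where(np.diff(np.sign(cents)) != 0)[0]
# Each vibrato cycle contains two zero crossings of the detrended contour.
span = (zero_crossings[-1] - zero_crossings[0]) / frame_rate
vibrato_rate = 0.5 * (len(zero_crossings) - 1) / span
vibrato_extent = 0.5 * (cents.max() - cents.min())

print("rate   ~ %.1f Hz" % vibrato_rate)               # expect ~5.5
print("extent ~ %.0f cents" % vibrato_extent)          # expect ~50
```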
Shizuoka University, Japan
ABSTRACT
We found that sound is radiated from the nostrils during the pronunciation of the buzz of voiced consonants. However, how voiced consonants are produced is not well known. Therefore, movies of the mid-sagittal plane of the head were made using fMRI. The speech materials are /cVcV/, where /c/ is one of the voiced consonants /b/, /d/ and /g/, and V is one of the 5 vowels. There are 3 male speakers. The frame period is 16.7 milliseconds. The length over which the velum touches the back wall of the vocal tract, and the distance between the upper part of the velum and the back wall of the vocal tract, were measured on every frame by visual judgment. At the beginning of the word-initial buzz, the velum does not touch the back wall of the vocal tract and leaves a gap of about 1 mm. For the word-medial buzz, however, the velum completely contacts the back wall of the vocal tract, just as for vowels. To simulate this phenomenon, we carried out an analysis using the electrical equivalent circuit model of a tube with loss. From the result, it is seen that at the beginning of the word-initial buzz, if the distance between the velum and the back wall of the vocal tract is 1 mm, sound can be radiated from the nostrils, similar to the nasal consonants.
Ritsumeikan University, Kusatsu, Kyoto, Japan.
ABSTRACT
Demands for technology to synthesize high-quality speech signals have been increasing. Conventional speech synthesis technologies were developed as voice coding for efficient telecommunication, because the telecommunication infrastructure was then immature. The voice coder (vocoder) was proposed to address this problem: it analyzes the speech signal, estimates speech parameters, and synthesizes the speech signal based on the estimated parameters, which are compressed for transmission. Therefore, conventional vocoders stressed the compression rate, whereas the sound quality of the synthesized speech was not emphasized. Recently, the sound quality of synthesized speech has been emphasized because the telecommunication infrastructure has matured, and vocoders have been developed for the purpose of synthesizing natural speech. A high-quality vocoder called STRAIGHT has attracted attention recently. Although STRAIGHT can control fundamental frequency (pitch) and spectral envelope (tone colour) independently without degradation of the sound quality, it requires a large amount of computation; therefore, the implementation of a real-time system based on STRAIGHT is difficult. In this paper, a new vocoder framework based on STRAIGHT is proposed to process the speech signal in real time. Conventional STRAIGHT synthesizes the speech signal based only on the estimated speech parameters; the proposed framework uses not only the estimated speech parameters but also the waveform to improve the sound quality. The proposed framework reduces the computational cost of STRAIGHT because it can omit part of the estimation in STRAIGHT. Subjective and objective experiments were conducted to evaluate the proposed method: elapsed time was used in the objective evaluation and MOS scores in the subjective evaluation. As a result, the proposed method was fast enough for real-time applications, and the sound quality of the speech signal synthesized by the proposed method is the same as that of STRAIGHT. Therefore, the proposed vocoder achieves quality as high as conventional STRAIGHT while processing the speech signal in real time.
Graduate School of Science and Technology, Kumamoto University, Japan
ABSTRACT
This paper describes development of an articulatory speech synthesis simulation system using a transmission line model. A speech synthesis method that simulates speech production process has a potential to produce human-like speech. However, there are many parameters to be set such as vocal tract area functions and timing between articulatory movement and vocal fold vibration. A simulation system that allows us to evaluate the effects of related parameters on synthesized speech provides us with useful information on adequate values for the related parameters. In this paper, therefore, an articulatory speech synthesis simulation system with graphical user interface (GUI) has been developed to overcome the difficulty in setting control parameters.
The system is based on the speech synthesizer proposed by Sondhi and Schroeter. In the synthesizer, the two-mass model of the vocal folds proposed by Ishizaka and Flanagan is used to produce the glottal source. This paper focuses on the production of stop consonants such as /p/, /t/ and /k/. The GUI-based simulation system has been developed in the Java language in order to investigate the relative timing between articulation and vocal fold vibration. First, area functions for /p/, /t/ and /k/ were studied by evaluating transfer functions of acoustic tubes representing vocal tract shapes, because the difference in the frequency range of the energy distribution among these consonants is one of the important cues. Secondly, the relative timing between the events of articulation and vocal fold vibration was studied by evaluating its effects on the synthesized speech from the point of view of voice onset time (VOT).
The simulation results showed adequate timing ranges between the two events for producing successful consonant sounds /p/, /t/ and /k/. Moreover, these results allow us to use the timing information to improve the simulation system with an automatic adjustment function for setting the related parameters.
Gipsa-lab, CNRS UMR 5216, Grenoble University, France
ABSTRACT
Voiced sound production involves interactions between an airflow coming from the lungs, the elastic structures which form the vocal folds, and acoustical resonators (lungs and oral and/or nasal tract). The study of the physics of these phenomena is of great importance, not only in order to improve our knowledge about these complex interactions, but also in order to model them, in particular for bio-medical applications. In this paper we assess some physical phenomena which, although widely recognized as being essential from a production or a perceptual point of view as well as when considering some pathological cases, are poorly described or even completely overlooked in earlier studies. These phenomena are associated with the closure of the glottis, the collision of the vocal folds, the presence of an asymmetry of either anatomical or biomechanical nature, and the complex inhomogeneous structure of the vocal folds. To study and validate the theoretical models, a specific experimental set-up has been built. It consists of a large pressure reservoir (representing the lungs) to which a replica of the trachea, the vocal folds and the vocal tract is connected. In order to mimic the internal structure of the human vocal folds, the self-oscillating replica is made of successive layers of material (such as water, silicone and latex), each having its own mechanical properties. Asymmetrical configurations can easily be reproduced experimentally by changing either the geometry or the internal structure of one fold of the replica. In addition to the acoustical pressure, the pressure upstream and downstream of the replica is measured, as well as the vibratory motion of the replica using a laser device. Several theoretical models of increasing complexity (starting with the well-known two-mass model of Ishizaka and Flanagan) accounting for these effects will be presented and compared with the experimental data. Of particular interest are the oscillation threshold pressure and the fundamental frequency of vibration, which can both be extracted from a dynamic analysis of the theoretical equations. Some simulation results, including sound samples, will be presented to illustrate the effects of each phenomenon.
(1) Federal Institute of Education, Science and Technoloy of Paraíba, Brazil (2) Federal University of Campina Grande, Paraíba, Brazil
ABSTRACT
There are several diseases, organic or neurological, that affect human voice quality. Acoustic analysis of voice features can be used as a complementary and noninvasive tool for the diagnosis of laryngeal pathologies. The degree of reliability and effectiveness of the discrimination process depends on the appropriate acoustic feature extraction. This work presents a parametric method based on cepstral features to discriminate the pathological voices of speakers affected by vocal fold edema and paralysis from healthy voices. Cepstral, weighted cepstral, delta cepstral, and weighted delta cepstral coefficients are obtained from the speech signals. Vector quantization is carried out individually for each feature in the classification process, associated with a distortion measure. The goal is to evaluate the performance of a classifier based on the individual and combined cepstral features. The average, the product and the weighted average are the combination strategies applied, yielding a multiple classifier that is more efficient than each individual technique. To assess the accuracy of the system, 153 speech files of the sustained vowel /a/ (53 healthy, 44 vocal fold edema and 56 paralysis) from the Disordered Voice Database of the Massachusetts Eye and Ear Infirmary (MEEI) are used. Results show that the employed parameters are complementary and can be used to detect vocal disorders caused by the presence of vocal fold pathologies.
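The general scheme of cepstral features plus vector-quantization codebooks can be sketched as follows; this is an illustrative outline only, not the authors' implementation, and the synthetic "voices" merely stand in for the MEEI recordings.

```python
# Illustrative sketch: frame-wise cepstral features, per-class k-means codebooks,
# and classification by minimum average distortion (synthetic placeholder signals).
import numpy as np
from scipy.signal import get_window
from sklearn.cluster import KMeans

def cepstral_features(x, fs, n_ceps=12, frame=0.03, hop=0.015):
    """Real-cepstrum coefficients (excluding c0) for each analysis frame."""
    n, h = int(frame * fs), int(hop * fs)
    win = get_window("hamming", n)
    feats = []
    for start in range(0, len(x) - n, h):
        spec = np.abs(np.fft.rfft(x[start:start + n] * win)) + 1e-12
        ceps = np.fft.irfft(np.log(spec))
        feats.append(ceps[1:n_ceps + 1])
    return np.array(feats)

def make_signal(f0, fs, dur=1.0, jitter=0.0, rng=None):
    t = np.arange(int(dur * fs)) / fs
    f = f0 * (1 + jitter * rng.normal(size=t.size))    # crude stand-in for a disordered voice
    return np.sin(2 * np.pi * np.cumsum(f) / fs)

rng = np.random.default_rng(0)
fs = 16000
train = {"healthy": make_signal(120, fs, rng=rng, jitter=0.0),
         "pathological": make_signal(120, fs, rng=rng, jitter=0.02)}
codebooks = {label: KMeans(n_clusters=8, n_init=10, random_state=0)
             .fit(cepstral_features(sig, fs)) for label, sig in train.items()}

test = make_signal(120, fs, rng=rng, jitter=0.02)
feats = cepstral_features(test, fs)
distortion = {label: km.transform(feats).min(axis=1).mean() for label, km in codebooks.items()}
print("classified as:", min(distortion, key=distortion.get))
```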
(1) University of New Brunswick, Canada (2) Central University of Las Villas, Cuba
ABSTRACT
This work focuses on modeling the perception of tremor found in pathological voices. The main research objective is to automatically separate the different sources of tremor in order to estimate the magnitude of tremor perturbations using signal processing techniques. A new assessment algorithm, derived from recorded speech, combines non-linear filtering, amplitude demodulation and spectral estimation techniques. The algorithm is able to separate tremor sources originating in the glottal area from those of the vocal tract, combining both sources to develop an objective acoustic measurement of tremor perturbations. The algorithm is evaluated against the perceptual judgments provided by speech pathologists and against other reported indexes, with excellent performance. The benefit of estimating independent sources of tremor is shown for differentiating normal from pathological tremor and for modeling the perception of tremor perturbations.
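The abstract does not disclose the algorithm itself; the following sketch only illustrates the generic building blocks it names, amplitude demodulation followed by spectral estimation of the slow (tremor-range) envelope modulation, with all cut-off values chosen purely for illustration:

```python
import numpy as np
from scipy.signal import hilbert, periodogram, butter, filtfilt

def tremor_modulation_spectrum(x, fs, f_lo=1.0, f_hi=15.0):
    """Estimate low-frequency amplitude-modulation content of a voice signal.

    Returns the modulation frequency with the largest power in the tremor
    range (roughly 1-15 Hz) and the full envelope modulation spectrum.
    """
    env = np.abs(hilbert(x))                   # amplitude demodulation
    env = env - env.mean()
    b, a = butter(4, f_hi / (fs / 2), btype="low")
    env = filtfilt(b, a, env)                  # keep only the slow modulations
    f, pxx = periodogram(env, fs=fs)
    band = (f >= f_lo) & (f <= f_hi)
    f_tremor = f[band][np.argmax(pxx[band])]
    return f_tremor, (f, pxx)

# usage (illustrative): x, fs from a sustained-vowel recording
# f_tremor, (f, pxx) = tremor_modulation_spectrum(x, fs)
```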
(1) School of Information Technology and Engineering, University of Ottawa, Canada (2) Audiology and Speech-Language Pathology Program, University of Ottawa, Canada
ABSTRACT
Measures of amplitude and frequency perturbations in the fundamental frequency (F0) of speech, known as shimmer and jitter respectively, are commonly used to assess speech pathology and voice quality. One limitation of these measures is that they are not based on auditory processing. Shimmer estimation, in particular, could benefit from the incorporation of auditory processing because the outputs of the peripheral auditory filters arranged along the tonotopic axis have very different amplitude modulation profiles at the fundamental periodicity. In this study, we compared the amplitude modulations in the brainstem response evoked by a natural vowel stimulus in seven normal hearing subjects to the shimmer in the broadband stimulus and in the stimulus filtered around each of the first four formants (F1 - F4). The correlation coefficients between the amplitude contour derived from the grand-averaged evoked response and amplitude contours derived from the broadband speech signal and the signal filtered around F1, F2, F3, and F4 were 0.66, 0.35, 0.65, 0.81, and 0.80 respectively. On the other hand, the stimulus amplitude contour variance (a measure of the power of amplitude perturbations) was 20.4, 8.4, 10.1, and 3.8 dB for the unfiltered signal and the signal filtered around F1, F2, and F3 respectively, relative to the variance of the amplitude contour of the signal filtered around F4. Therefore, strong correlations with the amplitude contour of the evoked response were obtained for the speech signal filtered around F3 and F4 in spite of having smaller amplitude perturbations compared to the broadband signal and the signal filtered around F1 and F2. This result suggests that shimmer calculated in broadband speech may not be the best measure of perceptually and physiologically relevant amplitude perturbations, and therefore indicates the need for representations that characterize shimmer separately in the different frequency regions of speech.
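A minimal sketch of the kind of per-formant analysis described above: band-filter the stimulus around a formant, extract its amplitude contour and express the contour variance in dB relative to another band. The filter type, bandwidth and formant frequencies below are assumptions for illustration, not the values used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_amplitude_contour(x, fs, fc, bw=400.0):
    """Amplitude (Hilbert) envelope of the signal filtered around formant fc."""
    sos = butter(4, [fc - bw / 2, fc + bw / 2], btype="band", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, x)))

def contour_variance_db(x, fs, fc, ref_fc, bw=400.0):
    """Variance of the amplitude contour around fc, in dB re the contour around ref_fc."""
    v = np.var(band_amplitude_contour(x, fs, fc, bw))
    v_ref = np.var(band_amplitude_contour(x, fs, ref_fc, bw))
    return 10.0 * np.log10(v / v_ref)

# usage (illustrative formant values for a vowel): variance around F1 re F4
# print(contour_variance_db(x, fs, fc=700.0, ref_fc=3500.0))
```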
(1) Laboratoire de Psychologie et NeuroCognition, Université Pierre Mendès, France (2) Institut Universitaire de France (3) GIPSA-Lab, Dpt. Parole et Cognition, France
ABSTRACT
Seeing the speaker's articulatory gestures enhances phoneme identification in noisy environments, or when visual and auditory information are in conflict. This study investigates the role of visual information in lexical access. Previous research showed that the auditory presentation of a monosyllabic prime facilitated the subsequent processing of a written dissyllabic target word. These findings suggest that the presentation of an auditory prime activated the lexical representation of the subsequent target word.
The goal of our study was to determine whether visual speech may also activate lexical representations. In a phonological priming paradigm, the participants had to perform a lexical decision task on auditory disyllabic target words / pseudowords. The target was always preceded by a monosyllabic prime which could be displayed in audio-visual (AV), auditory (A), or visual-only (V) modalities. The prime could overlap with the two initial phonemes of the target word / pseudoword (Related condition) or not (Non-Related condition). The analyses on target words indicated shorter latencies for the Related compared with the Non-Related condition. Interestingly, the facilitatory priming effect was present for the AV, A and also for the V primes. However, no facilitatory priming effect was observed for pseudowords. The word-latency data indicate that seeing the articulatory movements of a speaker facilitates the subsequent processing of an auditory word. This suggests that visual speech influences the word recognition process even when the acoustic signal is clear and congruent with the visual information. To determine the locus (pre-lexical or lexical) of this facilitatory priming effect, another study, using a phonological priming paradigm with a shadowing task, is in progress.
(1) Japan Advanced Institute of Science and Technology, Ishikawa, Japan (2) Konan University, Kobe, Japan
ABSTRACT
Speech conveys not only linguistic but also non-linguistic information. Among non-linguistic content, emotion plays an important role in speech communication. Many researchers have reported that F0 contours convey much emotional information in listening tests. However, they did not investigate the brain activities elicited by the emotions produced by these F0 contours. Brain activity can be measured through recently developed techniques (e.g., fMRI), and psychologists and neurologists are reporting results of measurements of brain activity elicited by emotional voices. However, these reports do not consider how acoustic features affect brain activity. This paper investigates relationships between the results of listening tests and measured brain activity as affected by F0 contours. The brain activities of subjects were measured while they listened to six stimulus sounds /eh/ with different F0 contours. One of the six stimuli was the original voice resynthesized using STRAIGHT without modifying any acoustic features (S0), and the others were synthesized by modifying the F0 contours (S1 - S5). The slopes of the F0 contours in S1 and S2 were downward, and those in S3, S4, and S5 were upward.
A psychoacoustic experiment was carried out to investigate which emotions were perceived in each stimulus (S0 - S5). The dominant emotional words for each stimulus were affirmation and sympathy (S0), affirmation and calm (S1), disappointment and sadness (S2), asking again and surprise (S3), doubt and negation (S4), and doubt and surprise (S5). For the brain activity measurement, we analyzed the differences between the brain activities elicited by the original voice and by each of the other five synthesized voices. Results show that in S1 minus S0 the dominant regions were the frontomarginal gyrus and superior frontal gyrus, belonging to the cerebral cortex; in S2 minus S0 they were the superior parietal lobule and angular gyrus, also in the cerebral cortex; and in S3 minus S0, S4 minus S0, and S5 minus S0 they were the caudate nucleus or putamen, belonging to the basal ganglia. These results suggest that S1 and S2, with downward slopes, convey social feelings and activate portions of the cerebral cortex, whereas S3, S4, and S5, with upward slopes, engage the attention system and activate portions of the basal ganglia. This suggests that differences in emotion conveyed by various F0 contours are perceived through processing across a hierarchy of brain regions.
(1) Department of Information Media Technology, Tokai University, Tokyo, Japan (2) Department of Information and Communication Sciences, Sophia University, Tokyo, Japan (3) TOA Corporation, Kobe, Japan
ABSTRACT
Speech production is often modified when we speak in an environment with noise (known as the Lombard effect), and several studies have shown that speech spoken in noise is more intelligible than speech spoken in a quiet environment. Our goal is to provide intelligible speech announcements in noisy and/or reverberant public spaces, such as train stations. Therefore, we are interested in how speech sounds are produced and what aspects of speech sounds make them intelligible in reverberation as well as in noise.
The present paper studied whether speech spoken in noise or reverberation was more intelligible than speech spoken in quiet. The noise condition was used to compare the results with previous studies. For the reverberation condition, we further tested how the results were influenced by whether the reverberation at the recording and listening stages was the same or different.
Words in a carrier sentence spoken by two native speakers of Japanese were recorded in a sound-proof room. The recording conditions were quiet (Q), noise (N), and reverberation (R1: reverberation time of 3.6 s), where the noise or reverberation was provided to the speakers through their headphones. Listening tests were conducted with 32 young native speakers of Japanese in noisy and reverberant conditions. For the noise condition, white noise was added to the speech spoken in quiet (Q-N: the recording was in quiet and the listening test was conducted in noise; the following abbreviations are formed in the same way) and in noise (N-N) at a signal-to-noise ratio of -2 dB. For the reverberant condition, two impulse responses with different reverberation times, R1 and R2 (reverberation time of 2.6 s), were convolved with the speech spoken in quiet (Q-R1, Q-R2) and in reverberation (R1-R1, R1-R2). The results showed that N-N and R1-R1 had significantly higher word identification scores than Q-N and Q-R1, respectively. This indicates that speech spoken not only in noise but also in reverberation is more intelligible than speech spoken in a quiet environment. The results also showed that R1-R2 had significantly higher word identification scores than Q-R2, indicating that speech produced in reverberation was more intelligible than speech spoken in quiet, regardless of whether speakers and listeners were in the same or different reverberant situations. The results further showed that modification of the pre-target phrase, which reduces reverberation masking, rather than modification of the target word, contributes to the improvements in speech intelligibility in reverberation.
Aichi Prefectural University, Aichi-gun, Japan
ABSTRACT
In public transportation such as a subway system, the announcement that tells passengers the next stop and on which side the doors will open is important and needs to be heard easily and clearly. However, it is normally so noisy inside a subway that passengers have difficulty hearing the announcement. A preceding study (Obayashi et al.) on this problem found that the high frequency bands of the announcement were cut off, and a method that extends the frequency range of the voice was proposed there. However, it was difficult to obtain good results for some consonants with that method. Since there is at least one consonant in each station name, it is very important that consonants be perceived. Therefore, in this study, we investigate a method to improve the perception of voices, especially of consonants, using voice emphasis. First of all, we compare the consonants of announcements recorded inside a subway by analyzing their power spectra. As a result, we identified why the preceding study was not able to obtain good results. We then formulated the hypothesis that emphasizing the whole frequency band of a phoneme is the key to effectiveness. A listening experiment to test the hypothesis was conducted from two aspects: to observe the variation in clearness and the quality of the voices obtained at the amplified levels. The experiment was carried out by having each participant listen to the processed noisy voices and report what they heard. The experiment showed improved voice recognition compared with the preceding studies. In particular, consonant recognition improved by about 10% while keeping the voice quality clear. It is concluded that the method proposed in this study is effective for improving consonant audibility in noisy environments.
National Institute of Advanced Industrial Science and Technology (AIST), Japan
ABSTRACT
Human listeners can perceive speech signals from a voice-modulated ultrasonic carrier presented through a bone-conduction stimulator (bone-conducted ultrasound, BCU), even if they are patients with sensorineural hearing loss. As an application of this phenomenon, we have been developing a bone-conducted ultrasonic hearing aid (BCUHA). The performance of the BCUHA has been evaluated in terms of syllable articulation and word intelligibility. These studies showed that syllable articulation scores when using BCU were over 60%, and word intelligibility scores for words with high familiarity were over 85%. The patterns of confusion in speech perception in the case of BCU have many points of similarity with those for air conduction (AC). Although the performance of the BCUHA for the perception of segmental units has been evaluated, its performance for the perception of suprasegmental or prosodic units has not. Japanese is a pitch accent language; many words are contrasted by their tonal features, for example, "a'ka" (red) vs. "aka" (dirt). Japanese is also a mora-timed language; all vowels and some consonants are contrasted by their length, as in "obasan" (aunt) vs. "oba:san" (grandma), or "kita" (came) vs. "kit:a" (cut). Since prosodic units of Japanese function as phonemes as described above, the perception of prosodic elements plays an important role in Japanese. The purpose of this study is to evaluate the perception of prosodic phonemes through the BCUHA.
A series of experiments consisting of minimal-pair judgement tasks was conducted. The minimal pairs were differentiated by prosodic elements. A pair of logatomes "etete" and "ete:te" was selected for the long/short vowel discrimination task, "etete" and "etette" for the single/geminate consonant task, and a pair of real words "a'ka" and "aka" for pitch accent. The stimuli were speech sound continua in which the relevant prosodic element was manipulated. The experiments were designed as single-stimulus two-alternative forced-choice identification tasks. Ten native Japanese listeners participated in the experiments. To examine whether normal air-conducted hearing (AC) and BCU hearing differ in the perception pattern of prosodic elements, the same tasks were conducted in the AC and BCU conditions. From the results of the experiments, the position and the sharpness of the categorical boundaries were computed using logistic regression analyses, and t-tests were then used to examine whether significant differences were present. The t-tests showed no significant difference between the AC and BCU conditions in any task. This result indicates that the BCUHA can effectively transmit prosodic phonemes as well as segmental elements.
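For readers unfamiliar with the analysis, the boundary position and sharpness of an identification function can be obtained from a logistic fit along the stimulus continuum; the sketch below uses invented response proportions and is only meant to illustrate the procedure named in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Proportion of one response category; x0 = boundary position, k = sharpness."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# continuum steps (e.g. manipulated vowel duration) and proportion of
# "long vowel" responses per step -- illustrative numbers only
steps = np.arange(1, 8)
p_long = np.array([0.02, 0.05, 0.15, 0.48, 0.85, 0.96, 0.99])

(x0, k), _ = curve_fit(logistic, steps, p_long, p0=[4.0, 1.0])
print(f"category boundary at step {x0:.2f}, sharpness {k:.2f}")
```

Boundary and sharpness values estimated this way for the AC and BCU conditions can then be compared with t-tests, as described above.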
Speech/Language Information Research Center, Electronics and Telecommunications Research Institute, Daejeon, Korea
ABSTRACT
In this paper, we present a new single-channel noise reduction method that integrates compensation and soft masking within the same statistical model assumptions for noise-robust speech recognition. By utilizing a Gaussian mixture model (GMM) as prior knowledge of the speech and added noise signals, the proposed method can effectively restore clean speech spectra and separate ambient noise from the target speech in the Wiener filter framework. Soft mask methods originally attempted to separate the speech signal of the speaker of interest from a mixture of speech signals. In the proposed method, by using pre-trained speech and noise models, the soft mask techniques are applied to separate added noise from the target speech. Combined with the model-based Wiener filter performing compensation on the power spectrum, the technique can efficiently reduce distortions caused by non-stationary noise and finally reconstruct clean speech spectra from the noise-corrupted observation. By using this result to infer the a priori SNR of the Wiener filter, we can estimate the clean speech signal with greater accuracy.
While the conventional Wiener filter causes inevitable distortions owing to noise reduction and cannot handle non-stationary noise that overlaps with speech-presence periods, the proposed method considerably alleviates these problems through compensation and soft masking based on the speech and noise GMMs. Results obtained in a practical speech recognition system for car environments show that the proposed method outperforms conventional methods.
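The abstract gives no equations, but the core idea, a Wiener gain driven by an a priori SNR and combined with a soft (probabilistic) mask, can be sketched per frequency bin as below; the clean and noise power estimates (which in the paper come from the speech and noise GMMs) and the speech-presence probabilities are assumed to be supplied by earlier processing stages:

```python
import numpy as np

def wiener_soft_mask(noisy_power, clean_est, noise_est, speech_prob):
    """Per-bin noise reduction combining a Wiener gain with a soft mask.

    noisy_power : |Y(f)|^2 of the observed frame
    clean_est   : estimated clean-speech power spectrum (e.g. from a speech GMM)
    noise_est   : estimated noise power spectrum (e.g. from a noise GMM)
    speech_prob : per-bin posterior probability that speech dominates (soft mask)
    """
    snr_prior = np.maximum(clean_est / np.maximum(noise_est, 1e-12), 1e-12)
    wiener_gain = snr_prior / (1.0 + snr_prior)       # classic Wiener rule
    gain = speech_prob * wiener_gain                  # soft-masked gain
    return gain * noisy_power                         # enhanced power spectrum
```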
(1,2) Indian Institute of Technology, Bombay, India (3) Basaveshwar Engineering College, Bagalkot, India
ABSTRACT
Earlier studies on binaural dichotic presentation by spectral splitting of the speech signal using a pair of complementary comb filters, for improving speech perception by persons with moderate bilateral sensorineural hearing loss, have shown mixed results: from no advantage to improvements in recognition scores corresponding to an SNR advantage of 2 - 9 dB. The filters used in these studies had different bandwidths and realizations. For an optimal performance of the scheme, the perceived loudness of the different spectral components in the speech signal should be balanced, especially for components in the transition bands which get presented to both ears. For selecting the magnitude responses of such filters, we have investigated the relationship between the signal amplitudes for binaural presentation of a tone that evoke the same loudness as a monaural presentation. Listening tests were conducted on eight normal-hearing subjects, comparing the perceived loudness of monaural presentations to that of binaural presentations with different combinations of amplitudes for the tones presented to the left and right ears, at 250 Hz, 500 Hz, 1 kHz, and 2 kHz. The sum of the amplitudes of the left and right tones in binaural presentation being equal to that of the monaural tone resulted in a monaural-binaural loudness match, indicating that the magnitude responses of the comb filters used for dichotic presentation should be complementary on a linear scale. An analysis of the magnitude responses of the comb filters used in earlier studies showed large deviations from the perceptual balance requirement, and those with smaller deviations were more effective in improving speech perception. A pair of comb filters, based on auditory critical bandwidths and with magnitude responses closely satisfying the requirement for perceptual balance, was designed as 512-coefficient linear-phase FIR filters for a sampling frequency of 10 kHz. Listening tests on six normal-hearing subjects showed improvements in the consonant recognition scores corresponding to an SNR advantage of approximately 12 dB. Tests on 11 subjects with moderate bilateral sensorineural hearing loss showed an improvement in the recognition score in the range of 14 - 31%. Thus the investigations showed that binaural dichotic presentation using comb filters designed for perceptual balance resulted in better speech perception.
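One simple way to obtain a pair of linear-phase FIR filters that are exactly complementary on a linear amplitude scale is to design one filter and form the other as a delayed unit impulse minus the first. The sketch below uses frequency-sampling design with illustrative band edges rather than the auditory critical bands of the actual study, and an odd filter length so that the complement has an integer-sample centre tap (the study itself used 512-coefficient filters):

```python
import numpy as np
from scipy.signal import firwin2, freqz

fs = 10_000            # sampling rate used in the study
numtaps = 513          # odd length -> type-I linear-phase FIR (illustrative choice)
trans = 50             # half-width of each transition region (Hz), illustrative

# Illustrative band edges (Hz); the study used auditory critical bands instead.
edges = [0, 500, 1000, 2000, 4000, fs / 2]

freq, gain = [0.0], [1.0]          # lowest band is assigned to the "left" filter
for i, e in enumerate(edges[1:-1], start=1):
    below = i % 2                  # gain of the band just below this edge (1, 0, 1, ...)
    freq += [e - trans, e + trans]
    gain += [below, 1 - below]
freq += [fs / 2]
gain += [gain[-1]]

h_left = firwin2(numtaps, freq, gain, fs=fs)
h_right = -h_left
h_right[numtaps // 2] += 1.0       # delayed impulse minus h_left: exact complement

# Because both filters share the same group delay, their amplitude responses
# sum to one at every frequency, which is the perceptual-balance condition.
w, H_left = freqz(h_left, worN=1024, fs=fs)
_, H_right = freqz(h_right, worN=1024, fs=fs)
print(np.max(np.abs(np.abs(H_left + H_right) - 1.0)))   # ~0
```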
Ritsumeikan University, Kusatsu, Kyoto, Japan
ABSTRACT
In recent years, the demand for flexible, high-quality speech manipulation has been growing. Conventional vocoder-based methods that extract speech parameters (fundamental frequency and spectral envelope) and synthesize speech from these parameters can manipulate the parameters flexibly, but the sound quality of the resulting speech is not sufficient for practical use. The STRAIGHT and TANDEM-STRAIGHT methods have been proposed to manipulate speech parameters flexibly and to synthesize high-quality speech. These methods require high-SNR speech to synthesize speech with high quality. In conventional studies, the speech segments are recorded in an anechoic room, a soundproof room, or a recording studio. In our study, we focus on the influence of the reverberation time of the recording environment on the sound quality of the synthesized speech. The relationship between the two is observed in a subjective experiment.
The speech segments used for synthesis by STRAIGHT were recorded in an anechoic room. Impulse responses with various reverberation times were applied to all segments, and these segments were then processed by STRAIGHT. The synthesized speech segments were used as the stimuli. All speech segments consisted of the five Japanese vowels (/aiueo/) spoken by a total of six speakers (three female and three male). These segments were sampled at 44.1 kHz with 16-bit resolution. The impulse responses used had reverberation times of 100 ms (a soundproof room), 550 ms (a living room), and 900 ms (a standard stairway). We used the mean opinion score (MOS) for the experiment. Subjects were asked to rate the sound quality of the reproduced stimulus from 1 (poor) to 5 (excellent). Results indicate that the reverberation time in the living room does not affect the sound quality. This means that speech recorded with high SNR is required to synthesize high quality speech with STRAIGHT. The relationship between the SNR and the sound quality of the synthesized speech will be discussed in our future work.
Faculty of Engineering, Oita University, Japan
ABSTRACT
The purpose of this study is to propose an improvement to speech recognition in noisy environments. Speech is the most natural form of human communication, and speech recognition has made it possible for computers to understand human voice commands and human languages; it is therefore a very important technology. Automatic speech recognition systems are most effective in noiseless environments: they work best under noiseless conditions but perform poorly under noisy conditions. If the data are polluted with noise, recognition becomes extremely difficult. For noise reduction of signals there are methods such as filtering and spectral subtraction, but these have limitations when the quality of the signal is poor, and this is the point we address. This study improves speech recognition in noisy environments by using the wavelet transform, weighting coefficients and cepstral analysis. To analyse noise problems, Fourier analysis has widely been used, but Fourier analysis reveals only frequency information, and general noise filters reduce signal as well as noise at specific frequencies, so it is difficult to remove only the noise. To overcome this difficulty, we apply the wavelet transform to speech recognition in noisy environments together with cepstral analysis. The wavelet transform is widely used for waveform and image analysis. We applied signal decomposition and synthesis by the wavelet transform, with weighting coefficients for specific levels, followed by cepstral analysis, and applied this method to speech recognition. Speech recognition experiments with Japanese noisy digits were performed; two kinds of colored noise were added to the original clean speech data to create the noisy data. As a result, the speech recognition rate is improved by this method. This shows that the wavelet transform with weighting coefficients is a promising methodology for filtering.
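The abstract does not specify its weighting scheme, so the sketch below stands in for the general idea only: decompose the noisy signal with a discrete wavelet transform, attenuate (threshold and weight) the detail coefficients level by level, and resynthesize before the cepstral front end; the wavelet choice, level count and weights are placeholders:

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4, weights=None):
    """Decompose, attenuate detail coefficients, and resynthesize.

    weights : optional per-level multipliers for the detail coefficients
              (level-specific weighting); thresholding uses the universal rule.
    """
    coeffs = pywt.wavedec(x, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    sigma = np.median(np.abs(details[-1])) / 0.6745   # noise estimate from finest level
    thr = sigma * np.sqrt(2 * np.log(len(x)))         # universal threshold
    out = [approx]
    for i, d in enumerate(details):
        d = pywt.threshold(d, thr, mode="soft")
        if weights is not None:
            d = weights[i] * d                        # emphasize/attenuate chosen levels
        out.append(d)
    return pywt.waverec(out, wavelet)[: len(x)]

# usage (illustrative): denoise before the cepstral analysis / recognition front end
# x_clean = wavelet_denoise(x_noisy, weights=[0.5, 0.8, 1.0, 1.0])
```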
Institute of Acoustics, Adam Mickiewicz University, Poznan, Poland
ABSTRACT
This study aimed to develop a Polish sentence test to measure speech intelligibility against interfering noise in preschool and school children (3-8 years old). The test is based on a 48-word matrix of three columns containing different subjects, verbs and objects. Since all the speech elements were available as separate sound files, it was possible to generate different sentences by juxtaposing randomly selected words from the respective columns. However, unlike in the standard matrix sentence test, in this test some word permutations are not allowed, in order to avoid the generation of semantically incorrect and/or low-context utterances. In this way, 256 sentences of a fixed grammatical structure could be generated. The speech reception threshold (SRT), i.e. the signal-to-noise ratio (SNR) providing 50% speech intelligibility, and the slope (S50) of the intelligibility function at the SRT point were measured. After the measurements an optimization procedure can be applied to improve the homogeneity of the speech material and reduce the standard deviation of the SRT estimates.
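For illustration, the SRT and slope S50 can be obtained by fitting a logistic psychometric function to intelligibility scores measured at several SNRs; the data points below are invented and the parametrization is one common choice, not necessarily the one used in this test:

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, srt, s50):
    """Word intelligibility vs SNR: 50% point at `srt`, slope `s50` (per dB) at that point."""
    return 1.0 / (1.0 + np.exp(-4.0 * s50 * (snr - srt)))

# SNRs (dB) and measured word-recognition proportions -- illustrative data only
snr = np.array([-12, -9, -6, -3, 0, 3])
score = np.array([0.05, 0.18, 0.42, 0.71, 0.90, 0.97])

(srt, s50), _ = curve_fit(psychometric, snr, score, p0=[-5.0, 0.1])
print(f"SRT = {srt:.1f} dB SNR, slope at SRT = {s50 * 100:.1f} %/dB")
```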
Department of Psychology, University of California, San Diego, USA
ABSTRACT
The existing literature suggests that native Mandarin speakers can identify lexical tones given only small segments of sound (syllable onset and offset). Using a combination of low onset pitch and either low or high offset pitch, the present study examined how on-line tone perception of Tones 2 and 3 by native speakers of Mandarin, as measured by eye movement data, was influenced by the pitch height of the tone at onset and offset. Participants listened to manipulated tone tokens and selected the corresponding word from four visually presented characters. An EyeLink II eye tracker recorded their eye movements during the entire procedure. The results showed that 90% of final tone judgments were made according to the cue of offset pitch, with high offset pitch as a cue for Tone 2 and low offset pitch as a cue for Tone 3. Low onset pitch served as a cue for Tone 3 and prompted more fixations on the word with Tone 3, until the offset pitches revealed the final tone choices and directed fixations to those words. This finding supports the view that pitch heights at tone onset and offset provide cues in the dynamic process of tone perception.
University of Applied Sciences, Hamburg, Germany
ABSTRACT
This paper investigates the impact of F3 manipulations within given human voice signals. For this purpose two psychoacoustic experiments were carried out. Following the source-filter theory of speech production, two modifications of the formant F3 were investigated: the impact of shifting its frequency and of widening its bandwidth on perceived vowel quality. These isolated manipulations are possible by means of linear prediction (LP) analysis. Root extraction of the LP data in the z-plane, in combination with FIR pole-zero filter design, allows parametric formant manipulations. For the sake of control, test sounds are pitched synthetically. For the psychoacoustic tests, originally spoken reference vowels are used which cover a wide range of vowel quality; in other words, these reference vowels include as much tongue and jaw characteristics as possible. Subjects had to rate the similarity of the perceived vowel quality of two manipulated sounds against the original reference sound. A general result of the study is that vowel quality perception is rather tolerant of bandwidth manipulations but quite sensitive to frequency manipulations. Only 60% of the subjects perceived vowel quality dissimilarities even when the bandwidth of F3 had been increased by about 1000 Hz; such bandwidth widening implies a complete elimination of F3 in most of the test sounds. By contrast, F3 frequency shifts of only 150 Hz already evoked perceptual differences for 80% of the subjects.
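The general recipe, LP analysis, root extraction in the z-plane and parametric manipulation of a selected pole pair, can be sketched as follows; this is a simplified stand-in (all-pole resynthesis of a single frame, no pitch control) rather than the FIR pole-zero design used in the paper, and the frequency range delimiting F3 is an assumption:

```python
import numpy as np
import scipy.signal as sig
import librosa

def shift_formant(frame, fs, f_lo, f_hi, shift_hz, order=18):
    """Shift LP poles whose frequencies fall in [f_lo, f_hi] by shift_hz."""
    a = librosa.lpc(frame.astype(float), order=order)    # LP coefficients [1, a1, ...]
    residual = sig.lfilter(a, [1.0], frame)               # inverse-filtered excitation
    new_roots = []
    for r in np.roots(a):
        f = np.angle(r) * fs / (2 * np.pi)
        if f_lo <= f <= f_hi:                              # positive-frequency pole of the target formant
            f += shift_hz
            r = np.abs(r) * np.exp(2j * np.pi * f / fs)
        elif -f_hi <= f <= -f_lo:                          # shift the conjugate to keep real coefficients
            f -= shift_hz
            r = np.abs(r) * np.exp(2j * np.pi * f / fs)
        new_roots.append(r)
    a_new = np.real(np.poly(new_roots))
    return sig.lfilter([1.0], a_new, residual)             # resynthesize with the shifted formant

# usage (illustrative): shift the F3 region of a vowel frame upward by 150 Hz
# y = shift_formant(frame, fs=16000, f_lo=2200, f_hi=3200, shift_hz=150)
```

Bandwidth widening can be sketched in the same way by scaling the pole radius |r| toward the origin instead of rotating its angle.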
University of Hull, East Riding, UK
ABSTRACT
When listening to someone's voice: what stimulus duration is required to tell whether the person speaking is a man or a woman; what are the acoustic cues in speech that influence such judgements; and how do manipulations of these acoustic cues influence such judgements? The vowels of five men and five women were recorded and played at a number of brief durations. The vowels were either unmodified and thus differed in both glottal-pulse rate and vocal-tract length (Expt 1), or had their glottal-pulse rate modified to be the same (Expt 2), or had their simulated vocal-tract length modified to be the same (Expt 3). Listeners were required to indicate whether the vowels were spoken by a man or a woman. Results show that correct speaker-sex judgement requires only brief-duration stimuli (about 10 ms), and that the removal of either the glottal-pulse rate or the vocal-tract length cue leads to reduced performance in judging the sex of the original speaker.
(1) Department of Applied Physics, School of Science and Engineering, Waseda University, Tokyo, Japan (2) Department of Design Information Sciences, Wakayama University, Wakayama, Japan
ABSTRACT
Personal characteristics in voice quality have recently attracted attention because of their potential to improve speech recognition performance and speech synthesis quality. The relationship between voice individuality and acoustic features has been studied in order to define a perceptual similarity measure for voices. In most studies evaluating perceptual similarity with acoustic features, synthetic speech is used as a stimulus in a subjective experiment, creating very similar impressions by interpolating two speakers' voices with a morphing technique; however, degradation of sound quality is a problem. In this paper we therefore propose a new method that uses imitated voices, produced by 11 mimicry speakers imitating 16 target voices. Our method has two special features. First, voice imitation is performed on a trial-and-error basis according to each speaker's perceptual criteria, and the similarity to the target voice is clearly closer for the imitated voice than for the speaker's natural voice, both perceptually and parametrically. Although imitating speakers attend to a wide variety of acoustic features, pitch frequency shift and spectral slope are the features most commonly manipulated when uttering an imitation. Especially when the target voice is far from a natural utterance, the spectrum in the higher frequency range and the MFCCs change. By precisely comparing his natural voice with the target voice, each speaker refines the imitation on a trial-and-error basis.
Second, we define a measurement scale based on subjective judgments of voice quality similarity. A five-level MOS rating is used to score perceptual similarity with the Thurstone paired-comparison methodology by 24 subjects. At the same time, parametric similarity is calculated as the Dynamic Time Warping (DTW) distance between target and imitated voices, and the correlation between the average of 30 subjects' perceptual scores and the DTW distance is examined by multiple regression analysis. The coefficient of multiple correlation is 0.66. There is a strong positive correlation between perceptual similarity and acoustic features such as the spectrum in the higher frequency domain, pitch frequency shift and spectral slope. We found two groups of imitators who attend to different acoustic features: voice quality and prosody. In the former case, there was a strong correlation between the spectral features of the DTW distance and the perceptual scores; in the latter case, there was a strong correlation with the spectral slope and pitch frequency shift components of the DTW distance. As a result, we illuminate which acoustic features contribute most when a person hears a voice with personal characteristics.
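As a concrete illustration of the parametric-similarity side of the method, a DTW distance between feature sequences of a target and an imitated utterance can be computed as below; the use of MFCCs, the sample rate and the file names are assumptions for the sketch, not the exact feature set of the study:

```python
import librosa

def dtw_mfcc_distance(path_target, path_imitation, n_mfcc=13):
    """Accumulated DTW cost between MFCC sequences, normalized by path length."""
    y1, sr1 = librosa.load(path_target, sr=16000)
    y2, sr2 = librosa.load(path_imitation, sr=16000)
    X = librosa.feature.mfcc(y=y1, sr=sr1, n_mfcc=n_mfcc)
    Y = librosa.feature.mfcc(y=y2, sr=sr2, n_mfcc=n_mfcc)
    D, wp = librosa.sequence.dtw(X=X, Y=Y, metric="euclidean")
    return D[-1, -1] / len(wp)

# usage (illustrative, hypothetical file names):
# print(dtw_mfcc_distance("target_speaker.wav", "imitated_voice.wav"))
```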
Department of Physics, Government P.G. College, Rishikesh, Dehradun, India
ABSTRACT
Steady-state vowel duration is one of the acoustic cues that provide information about speech intonation. Intonation contours contribute information about the prosodic structure of an utterance, and in tonal languages linguistic information is also carried in the intonation contours of speech. Thus, sensitivity to speech intonation is an important aspect of speech perception. This paper presents a study of the steady-state vowel duration of Garhwali Hindi syllables abutted with ten vowels /a, a:, i, i:, u, u:, e, e:, o, o:/ in different positions within words, i.e. initial, middle and final position. These tokens were spoken in isolation by 15 adult male and 15 adult female speakers. Garhwali Hindi is a regional dialect of Uttarakhand, a newly formed state of India. It was found that voiced consonants have higher steady-state vowel durations than voiceless consonants for both male and female speakers, and that male speakers have higher steady-state vowel durations than female speakers.
(1) Laboratory for Language Development, Brain Science Institute, RIKEN, Wako, Japan (2) JSPS Postdoctoral Fellow, Japan (3) Department of Linguistics, Tohoku University, Sendai, Japan (4) Department of Psychology and Neuroscience, Duke University, Durham, USA
ABSTRACT
While Standard (Tokyo) Japanese has a lexical tonal system known as 'lexical pitch accent', there are some varieties of Japanese, called 'accentless' dialects, which do not have any lexical tonal phenomena. We investigated the differences in the perception of lexical pitch accent between speakers of an accentless dialect and those of Standard Japanese, and the robustness of two approaches to investigating such dialectal differences. We conducted two experiments: a categorical perception experiment and a sequence recall experiment. The former is an approach that has traditionally been employed to study the perception of phonological contrasts. The latter is a more recent method employed in studies of 'stress-deafness' in French by Dupoux and his colleagues, in which participants listen to sequences of several nonsense words and report the order of the words. The results of the categorical perception experiment showed no clear dialectal differences. On the other hand, the results of the sequence recall task showed that the scores of the 'accentless' group were clearly lower than those of the control (Standard Japanese) participants in the discrimination of nonsense words whose pitch accent differences corresponded to lexical differences in Standard Japanese phonology. Thus, it is concluded that the latter experimental approach is more robust for studying dialectal differences in pitch accent perception than the former.
(1) MARCS Auditory Laboratories, UWS, NSW, Australia (2) University of New England, NSW, Australia (3) University of Newcastle, NSW, Australia
ABSTRACT
The Australian Aboriginal language Wubuy, spoken in Eastern Arnhem Land, has a four-way coronal place distinction /t, ṱ, ʈ, c/ but no voicing distinction. This coronal series is contrastive word-medially, word-initially and, by implication, utterance-initially. The existing literature on multiple coronal contrasts, principally from Australian and South Asian languages, however, records two common patterns of (near-)neutralisation of coronal contrasts. Firstly, it is claimed that the formant transitions of alveolars and dentals are essentially identical and cannot be used to distinguish between them. Secondly, these languages are commonly reported as neutralising the apical contrast in word-initial position; in part, this neutralisation is thought to be due to the minimal difference in the formant transitions of apical release. The question remains, however, of what distinguishes dentals from alveolars, and apicals from each other. We recorded three female Wubuy speakers producing the coronal stops in three contexts: word-medial /aCa/, morpheme-initial /a#Ca/, and absolute utterance-initial /##Ca/. Among other correlates, we examined the influence of prosodic context on gestural timing in differentiating /t, ṱ, ʈ/, as reflected in the acoustic measures of Closure Duration [CD] and Voice Onset Time [VOT].
For the /aCa/ and /a#Ca/ contexts, the pattern of CD was /ʈ/
MARCS Auditory Laboratories, University of Western Sydney, New South Wales, Australia
ABSTRACT
Speakers alter the way they produce speech according to the communicative situation. Changes are made to enhance the efficiency of information transmission. For instance, when in noisy environments, people speak loudly and produce more energy in higher frequencies (the Lombard effect). This study investigated whether a change in the visual conditions associated with communication would also lead to modification in speech production. More specifically, it examined if auditory prosody would be affected by whether the speaker could see the interlocutor or not. In the experiment, two types of prosodic contrasts were included. The first was 'prosodic focus' used by speakers to enhance the perceptual salience of an item. The second was 'prosodic phrasing' which refers to the phrasing of a sentence as a question without using an interrogative pronoun. Four speakers were recorded while completing a dialog exchange task in which the interlocutor could or could not be seen. The results showed that the corner-most vowels recorded in narrow focus and echoic question contexts were produced over longer durations and with a greater vowel space (reflected by greater vowel triangle area and vowel triangle dispersions) relative to broad focused renditions across both interaction conditions. With the exception of intensity, no other acoustic or spectral properties appeared to be enhanced at the phonemic level when the interlocutor was not visible to the speaker. This may be due to prosody affecting the utterance at more global levels (e.g., word and utterance levels), rather than at the localized vowel level. That is, modifications may be seen between interactive conditions in terms of pitch contours, pre-focal shortening and intensity profiles when examined across the whole utterance.
MARCS Auditory Laboratories, UWS, NSW, Australia
ABSTRACT
Familiarity with a talker facilitates perception for both heard speech (where speech from a familiar talker is better identified in noise) and for visual speech (where familiarity with a talker's face assists visual speech recognition). Recently, it has even been shown that the talker familiarity effect can be produced cross-modally, i.e., experience in speech-reading a talker facilitates performance on a SPeech-In-Noise (SPIN) task. The current study examined within- and across-modal speaker familiarity effects with short-term familiarity training and a test of transfer to SPIN performance from auditory-only (AO), visual-only (VO) and auditory-visual (AV) exposure. The results showed that there was transfer from AO and VO talker familiarization, but not from AV speech. The results are discussed in terms of how the familiarity effect might be sensitive to the degree of bottom-up attention initially paid to a talker's speech.
(1) Department of Psychology, Middle East Technical University, Cyprus (2) Faculty of Letters, Kumamoto University, Japan (3) MARCS Auditory Laboratories, University of Western Sydney, NSW, Australia
ABSTRACT
To understand the now well-established auditory-visual nature of speech perception, it is necessary to understand how it develops. We know that young infants perceive speech auditory-visually from the fact that they perceive the auditory-visual illusion known as the McGurk effect; that visual information use increases over age in English-language children; and that Japanese-language adults use less visual information than do their English-language counterparts. Here we complete the developmental scene and probe the processes involved. In Experiment 1, with 6-, 8-, and 11-year-old and adult Japanese- and English-language participants tested on a McGurk task, while 6-year-olds from both language groups were equivalently influenced by visual speech information, there was a significant jump in auditory-visual speech perception between 6 and 8 years in English- but not Japanese-language participants. To investigate this further, in Experiment 2 we gave English-speaking 5-, 6-, 7- and 8-year-olds and adults a McGurk effect task as well as a language-specific speech perception (LSSP) test with native and non-native speech sounds, and reading and articulation tests. For children, but not adults, visual-only speech perception (lipreading) ability and LSSP predicted McGurk performance: children with good auditory-visual speech perception tended to be those who focussed more on native than non-native speech sounds. In Experiment 3, with 3- and 4-year-olds tested for the McGurk effect, LSSP, receptive vocabulary, and cognitive skill, regression analyses showed that auditory-only speech perception and cognitive skill, but not LSSP, predicted auditory-visual speech performance. Together the results show that there is an increase in auditory-visual speech perception between 6 and 8 years in English- but not Japanese-language children, and that in English-language children this is related to language-specific speech perception processes specifically around that age (5, 6, 7, 8 years) and not before (3, 4 years) or after (adults). It is suggested that LSSP is most variable and most predictive of visual influence in speech perception in the presence of significant linguistic challenges, such as those at the onset of reading instruction.
MARCS Auditory Laboratories, UWS, Sydney, Australia
ABSTRACT
Second language (L2) listeners' auditory speech perception is more vulnerable to noise than that of first language (L1) listeners. Impoverished auditory perception may cause L2 listeners to rely more on visual speech cues when perceiving speech in noise. The present study examined whether L1 and L2 perceivers might differ in their use of visual speech cues. In the experiment, English-Spanish and Spanish-English bilingual participants were tested in a phoneme identification task across 16 English and 16 Spanish consonants (in the context of VCV syllables) that were presented in auditory-only, visual-only and auditory-visual conditions, with or without background 'babble' noise. The results showed that overall, L1 perceivers outperformed L2 perceivers across all conditions, and both groups improved in auditory-visual compared to auditory-only conditions. L2 listeners' performance showed a greater drop from in-clear to in-noise conditions compared to L1 listeners. Despite the discrepancy between L1 and L2 listeners in performance, the relative degree of improvement in auditory-visual compared to auditory-only conditions was the same for both L1 and L2 listeners. Further, auditory-visual integration efficiency measures showed no significant difference between the L1 and L2 listener groups. These results suggest that L1 and L2 users give similar weight to visual cues in speech perception and indicate that L2 listeners' vulnerability to perceiving acoustic speech cues in noise is not compensated for by better use of visual speech cues.
(1) Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands (2) Donders Centre for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands (3) MARCS Auditory Laboratories, University of Western Sydney, NSW, Australia
ABSTRACT
The speech infants hear, in the first year of life before they themselves begin to speak, is mainly multi-word utterances, without clear pauses between the words. Thus to construct the initial vocabulary they need to begin speaking themselves, infants need to learn how to segment words from speech. Indeed, there is evidence that segmentation ability in the first year of life correlates positively with vocabulary size at two years. This evidence has come principally from studies of segmentation using the behavioral headturn-preference procedure. If infants first hear words in isolation, and then recognise these familiarised words when they occur later in sentences, they have shown that they can segment individual words out of multi-word utterances. An electrophysiological analogue to this behavioral procedure, measuring Event-Related Potentials (ERPs) was however later developed by Kooijman. This allowed, for the first time, an online assessment of infants' word segmentation. Kooijman tested seven- and ten-month-olds; the ten-month-olds showed a clear recognition response (in the form of a left negativity) for familiarised words heard later in sentences, relative to unfamiliar words. This showed that the ten-month-olds indeed had the ability to segment speech. Such segmentation behavior was not, however, consistently present in the seven-month-olds. We here report three studies relating this ERP measure of speech segmentation to later language development. First, we divided the seven-month-old infants tested by Kooijman into two sub-groups: those with an ERP effect similar to the 10-month-olds' pattern, and those without such an effect. When re-tested at three years of age, the former group displayed significantly higher language scores than the latter group. Second, we examined whether ten-month-olds can recognize words that have previously been presented just once, within an utterance. Recognition was again indicated by a left-frontal negativity, and presence and size of this response proved in later testing to be related to vocabulary size, both at 12 and at 24 months. Third, we conducted a study in which both familiarization and test phases consisted of continuous sentences. Again we observed the same recognition response in the infant brain, and the patterning of this response was once more related to later performance, this time in a test of recognition of known words at 16 months. Hence, with a variety of measures, we see that a consistently observed ERP effect of word segmentation serves as a direct predictor of the degree of later language development.
(1) MARCS Auditory Laboratories, University of Western Sydney, NSW, Australia (2) Haskins Laboratories (3) School of Psychology, University of Western Sydney, NSW, Australia
ABSTRACT
Debate over whether early word learners attend to phonetic details or phonemic structure has hinged on their discrimination of word/nonword minimal-pairs (e.g., "BABY" vs. "VABY"). However, such manipulations (/b/ to /v/) conflate phonetic and phonological changes, making it difficult to tease apart the two accounts. To overcome this, we compared children's identification of familiar words pronounced in a native (Australian English; AusE) and a non-native dialect (Jamaican Mesolect English; JaME), as cross-dialect pronunciations are phonetically, but not phonologically, disparate. We used an eyetracking (Tobii X120) task to assess word identification. Vocabulary size was used as a predictive measure. We compared 15- (N=12) and 19-month-olds' (N=10) looks to corresponding target and distractor images during word repetitions. In all eighteen test trials per dialect, a target word played at the end of a carrier sentence, followed by a second token of the word, then by animation of the target image while a reward phrase played (e.g., "You got it!"). Fifteen-month-olds looked longer to the named target image than the distractor image in AusE [t(11)=2.24, p<.05], but not JaME, suggesting attunement to experienced phonetic details of their regional dialect, while 19-month-olds identified words in AusE [t(9)=5.67, p<.001], and approached significance in identification in JaME [t(9)=2.21, p=.055], suggesting a perceptual shift to recognizing abstract phonological structure. Moreover, vocabulary size, but not age, was correlated with target-looking in the non-native dialect [R2=.18, R=.43, F(1, 21)=4.40, p<.05], suggesting vocabulary plays an important role in driving this perceptual shift. These findings correspond to results from a previous preference study and to other reports indicating expressive vocabulary size is strongly associated with the emergence of phonologically-based word recognition in toddlers.
MARCS Auditory Laboratories, University of Western Sydney, NSW, Australia
ABSTRACT
Visual speech (lipreading) supports speech perception not only when the auditory signal is limited, degraded, or missing, but also in mismatched auditory and visual speech component conditions when the auditory signal is clear and undegraded. While most of this research has been done with segments (consonants and vowels), research at MARCS has now provided evidence for visual speech cues for non-segmental features of language, particularly lexical tone. Early work has shown that Cantonese adults identify visual-only words differing only on tone at a rate significantly above chance; that even non-Cantonese tone (Thai) and non-tone (English) language speakers use visual information to discriminate words differing only in tone; and that one of the most likely vehicles for visual tone information is minute rigid movements of the head. In a more recent study, we investigated visual augmentation for discrimination of Mandarin tones, with F0 information degraded by using simulated cochlear implant (CI) audio. Native Mandarin and Australian English speakers were asked to discriminate between minimal pairs of Mandarin tones in five conditions: Auditory-Only, Auditory-Visual, CI-simulated Auditory-only, CI-simulated Auditory-Visual, and Visual-Only. Discrimination in CI-simulated audio conditions was poor compared with normal audio, but the availability of visual speech information improved discrimination in CI-simulated audio conditions, particularly on tone pairs with strong durational cues, but also for some pairs cued primarily by F0 cues. In Visual-Only, both Mandarin and Australian English speakers discriminated tones above chance, and interestingly, tone-naïve listeners outperformed native listeners, suggesting firstly that visual speech information for tone is available and may be under-used by normal-hearing tone language perceivers, and secondly that the perception of such information may be language general, rather than the product of language specific learning.
In a follow-up study with English-language children, it was found that point-light reductions of visual tone information did not augment tone perception, but tone perception was stronger when their pitch contours were presented as violin sounds rather than as natural speech. In future studies along this line, we will include both Electroencephalography (EEG) measurements and Electroglottograph (EGG)-created stimuli. EEG measurements taken during a native tone perception task will be compared to those taken in a non-native perception task. EGG stimuli will be created from native tone productions to examine perception of native and non-native tones in the absence of segmental information.
MARCS Auditory Laboratories, University of Western Sydney, NSW, Australia
ABSTRACT
Lexical tones are pitch patterns that distinguish between meanings of syllables with an identical segmental structure in languages such as Cantonese. Cantonese has 6 lexical tones that are distinctive in terms of their acoustic, segmental and semantic characteristics. A model of lexical tone representation is presented that posits a hierarchy of lexical tone representation, in which acoustic, segmental and semantic characteristics each contribute to tone perception. Two experiments were designed to test this model with Cantonese-speaking kindergarteners, and second- and fifth-graders. In Experiment 1, kindergarteners were better at identifying the target tone in high level (55) vs. high rising (25) contrasts (same offset height, different contours) than in other pairs of tone contrasts, with the worst performance on the mid level (33) vs. low level (22) contrasts (same contour, small height difference). In Experiment 2, second- and fifth-graders were tested for tone discrimination in three different segmental contexts, and it was found that for both groups tone discrimination was most difficult in different onset/same rime (DO/SR) contexts, then same onset/different rime (SO/DR), then different onset/different rime (DO/DR) contexts; DO/SR > SO/DR > DO/DR. In regression analyses in each experiment, it was found that tone perception ability uniquely contributed to early Chinese word reading for young kindergarteners but not for second- or fifth-graders, even after controlling for segmental levels of phonological awareness. There is also a strong association between tone perception and Chinese vocabulary across ages. Interestingly, tone perception is also uniquely associated with English word reading for grade 2 children but not for grade 5. Our results support a hierarchical representation model of lexical tones in which the distinctiveness of acoustic cues (pitch height and contour), segmental contexts (vowels and consonants) and semantic levels affect tone perception even up to grade 5, but while tone perception predicts vocabulary across ages, it only predicts reading ability (in both Chinese and English) for young children. This may be due to the development of better tone perception across acoustic and contextual variation at older ages.
(1) Wakayama University, Wakayama, Japan (2) Ritsumeikan University, Kusatsu, Kyoto, Japan
ABSTRACT
A new spectral estimation method is proposed which improves the processed sound quality of STRAIGHT, a speech analysis, modification and re-synthesis framework widely used for high-quality speech and singing manipulations. Application of the proposed method to TANDEM-STRAIGHT, a completely reformulated version of STRAIGHT, yielded the best spectral envelope approximation among conventional methods such as LPC, cepstrum and legacy-STRAIGHT. TANDEM-STRAIGHT consists of two parts: a temporally stable power spectrum estimation method for periodic signals (TANDEM) and a spectral envelope calculation method based on consistent sampling theory. The proposed method uses F0-adaptive smoothing and compensation of the logarithmic power spectrum to improve the approximation accuracy of spectral peaks, which affects the quality of the re-synthesized sound. A series of simulations was conducted to optimize the internal parameters of the proposed method. The optimized system was evaluated and compared with conventional methods using stylized spectra and simulated speech spectra. The evaluation was based on a spectral distance measure proposed by Itakura and Saito, modified to the perceptually relevant ERB-N number frequency axis. The following set of spectra was used: power spectra calculated from vocal tract area functions measured from MRI data, combined with LF-model excitation spectra, served as the ground truth, and spectral distances between this target and the estimated spectra were evaluated; a periodic pulse train was used as the excitation signal in this case. These evaluation results indicate that the proposed method yields the smallest spectral distance among conventional methods such as LPC, cepstrum and legacy-STRAIGHT.
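The exact distance measure used in the evaluation is not reproduced here; the sketch below merely illustrates the idea of comparing two spectra on a perceptually motivated axis by resampling them on a grid that is uniform in ERB-N number (Glasberg and Moore's formula) before taking an RMS log-spectral difference:

```python
import numpy as np

def erb_number(f_hz):
    """ERB-rate scale (Glasberg & Moore, 1990): frequency in Hz -> ERB-N number."""
    return 21.4 * np.log10(1.0 + 0.00437 * f_hz)

def erb_warped_log_spectral_distance(f_hz, spec_a, spec_b, n_points=200):
    """RMS log-spectral distance (dB) evaluated on a grid uniform in ERB-N number."""
    e = erb_number(np.asarray(f_hz, dtype=float))
    e_grid = np.linspace(e[0], e[-1], n_points)
    a_db = 10 * np.log10(np.interp(e_grid, e, spec_a))
    b_db = 10 * np.log10(np.interp(e_grid, e, spec_b))
    return np.sqrt(np.mean((a_db - b_db) ** 2))

# usage (illustrative): compare an estimated envelope with the reference power spectrum
# d = erb_warped_log_spectral_distance(freqs, estimated_power, reference_power)
```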
(1) U2S Signals and Systems Research Unit, National Engineering School of Tunis, Tunisia (2) GIPSA-Lab Image, Speech, Signal and Automatic Group, National Polytechnic Institute of Grenoble, France
ABSTRACT
The harmonic plus noise model (HNM) is widely used for spectral modeling of sounds which combine harmonic and noise components, such as speech and the signals produced by a number of musical instruments. A simplified and efficient version of the HNM, developed by Stylianou et al., splits the frequency band of the signal into two bands: a harmonic part at low frequencies and a noise-like part at high frequencies, separated by a time-varying cut-off frequency. In this study, we propose to model the time trajectories of the HNM model parameters for non-stationary signals, focusing especially on speech signals. This is done for time intervals on the order of a hundred milliseconds and more, thus significantly longer than the usual short-term time frames used in analysis/synthesis models and in speech codecs. The goal is to capture and exploit the long-term correlation of spectral components, as it appears across spectral parameters extracted from consecutive short-term frames.
Previous work by Firouzmand et al. dealt with long-term parametric modeling in the more general framework of the sinusoidal model (i.e. long-term modeling of amplitude and phase parameters). We propose to extend this work to the HNM framework in order to obtain a complete long-term HNM model. In this case, the parameters to be modeled on a long-term basis are the spectral envelope (which encompasses the harmonic and noise regions), the fundamental frequency (which characterizes the harmonic region) and the cut-off frequency (which separates the harmonic and noise bands). To do this, the speech signal is first segmented into voiced (actually mixed voiced/unvoiced) sections and unvoiced sections, and a discrete cosine model is used to represent the time trajectory of the HNM parameters over each entire section. To estimate the long-term parameters, we propose an adaptation of the fitting algorithm previously used by Firouzmand et al. in the sinusoidal case to the new HNM framework. Signal resynthesis from the long-term parameters is also described. The proposed long-term HNM model can be used for music and speech analysis/synthesis. It allows a jointly compact representation of signals (and thus promising potential for low bit-rate coding) and easy signal manipulation directly from the long-term parameters (e.g. time stretching by direct interpolation). We present several experiments to demonstrate the efficiency of this model. For instance, the proposed long-term HNM is compared to the short-term version in terms of listening quality and coefficient rate.
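The long-term modelling idea can be illustrated with a toy example: represent a per-frame parameter trajectory (here a synthetic F0 track over one voiced section) by a handful of discrete-cosine coefficients and reconstruct it. This is only a sketch of the principle, not the fitting algorithm of Firouzmand et al.:

```python
import numpy as np
from scipy.fft import dct, idct

def long_term_model(trajectory, n_coeffs=8):
    """Compress a per-frame parameter trajectory with a truncated DCT."""
    c = dct(np.asarray(trajectory, dtype=float), norm="ortho")
    c[n_coeffs:] = 0.0                     # keep only the first n_coeffs coefficients
    return c[:n_coeffs], idct(c, norm="ortho")

# usage (illustrative): F0 values of consecutive frames over one voiced section
f0_track = 120 + 15 * np.sin(np.linspace(0, 3, 60)) + np.random.randn(60)
coeffs, f0_smooth = long_term_model(f0_track, n_coeffs=6)
print("RMS reconstruction error (Hz):", np.sqrt(np.mean((f0_track - f0_smooth) ** 2)))
```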
Delft University of Technology, The Netherlands
ABSTRACT
The combination of a binaural beamformer with an auditory model-based localizer and post-processor is presented. The intention of the design was to maintain a high degree of quality and listening ease while enhancing speech intelligibility. Therefore, a binaural and fixed minimum variance distortionless response beamformer was employed to establish a medium directivity at low self noise and high binaural naturalness. By applying models of binaural interaction at the two-channel output, sound sources are localized and separated. The process of binaural interaction was extended by an auditory model of across frequency interaction in the localizer and a model of modulation perception in the post-processor. To adapt the post-processor to the scene and to keep the introduced distortion at a low level, the parametric output of the localizer was used to steer the aperture of a spatial filter during post-processing. For that reason, Bayes theorem was applied to calculate the a posteriori probability of the target source in a complex acoustic scene and to use this probability for the formulation of a data-driven filter. The localizer and post-processor were assessed in a range of acoustic arrangements using an objective intelligibility and quality measure for nonlinearly processed speech. The results show a significant improvement in situations with one interferer and no decline of intelligibility and quality in more complex situations.
Institute of Acoustics, Chinese Academy of Sciences, Beijing, P.R.China
ABSTRACT
An open spherical microphone array based on a Bucky Ball design is presented. The beam steering properties of this design are discussed on the basis of spherical harmonics. With regard to the physics of sound propagation, an algorithm for calculating the coefficients of the spherical sound field is also studied, based on field decomposition using spherical harmonics. The field distribution is analyzed using the spherical Fourier transform. The properties of the Bucky Ball design are compared with those of other designs in terms of directivity index (DI) and white noise gain (WNG). The beam pattern of the open spherical array is also analyzed. Simulation results for the separation and detection of multiple sources show that the Bucky Ball design has the advantage of suppressing the side-lobes of the pattern while maintaining DI and WNG.
(1) The HEARing CRC, Melbourne, Victoria, Australia (2) National Acoustic Laboratories, Sydney, NSW, Australia
ABSTRACT
Electronic communications equipment such as telephones, two-way radios, computers, amplified hearing protectors and hearing aids can reproduce noise with a loudness in excess of the speech they reproduce. Noise that is louder than conversational speech is typically perceived as less comfortable and in some cases can injure the listener, for example by producing an acoustic shock injury or a hearing loss. The conventional approach to controlling loud noise is to use a sound level limiter; however, conventional sound level limiting suffers from several shortcomings. Firstly, there is always a compromise when setting a limiting level: if it is set to a high level, the listener can be subjected to loud sound, but if it is set to a low level, the speech will be limited, which will reduce its quality and intelligibility. Secondly, conventional methods of sound level limiting do not adapt to the sound to which the listener is acclimatised. A new approach to sound level limiting is to use the loudness of the speech that the listener is hearing as a reference and to reduce the loudness of non-speech sounds with respect to this reference. This novel method is called Speech Referenced Limiting (SRL). The limiting level is adaptive and automatically set by the loudness of the speech to which the listener is acclimatised. When done on a frequency-specific basis, an umpire's whistle is reduced to the maximum level of the treble of a recent conversation and the rumble of a truck to the maximum level of its bass. This is achieved by estimating the maximum loudness of speech at different frequencies to produce a speech reference and limiting sound that exceeds this reference. A digital signal processing algorithm has been developed to perform the method. Details of the SRL scheme and experimental data on the effects of SRL on speech and noise are presented.
Ritsumeikan University, Kusatsu, Kyoto, Japan
ABSTRACT
The CENSREC-4 (Corpus and Environment for Noisy Speech RECognition 4) evaluation framework has been distributed for evaluating distant-talking speech under various reverberation environments. CENSREC-4 includes both real reverberant speech and simulated reverberant speech obtained by convolving clean speech with impulse responses measured in the same environments. In addition, it contains many room impulse responses measured in real environments, allowing various environments to be simulated by convolution with clean speech signals. However, how variable the reverberant impulse responses contained in CENSREC-4 are has not been evaluated. We therefore evaluate CENSREC-4 with our proposed reverberation criterion based on the C value of the ISO 3382 Annex A acoustic parameters. We focus specifically on criteria that represent the difficulty of reverberant speech recognition, and also examine why it is difficult to easily evaluate the recognition accuracy for part of the CENSREC-4 data sets using our proposed criterion. We previously proposed a reverberation criterion based on the C value of the ISO 3382 Annex A acoustic parameters to represent the difficulty of reverberant speech recognition, and attempted to estimate the performance of distant-talking speech recognition from the impulse response between the speaker and the microphone. First, we investigated the relation between the C value and the accuracy of reverberant speech recognition based on measured impulse responses. We then calculated a regression curve, approximated by exponential regression analysis, for each reverberant environment. Finally, we estimated the recognition accuracy in various reverberant environments with CENSREC-4.
We carried out evaluation experiments to confirm the difficulty of easily evaluating the recognition accuracy for part of the CENSREC-4 data sets. As a result, we confirmed that the recognition accuracy could be estimated with 0.5% error in the 250 ms (T[60]) environment, 2.9% error in the 450 ms environment, 4.6% error in the 600 ms environment and 20.2% error in the 850 ms environment. We were able to estimate the recognition accuracy of reverberant speech accurately in light reverberation environments when the relation between the C value and the recognition performance was approximated by an exponential function. Conversely, we confirmed that it is difficult to estimate the accuracy of reverberant speech recognition in heavy reverberation environments with CENSREC-4. We therefore confirmed that CENSREC-4 contains very challenging and variable reverberant data.
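The exponential regression step described above can be illustrated as follows; the clarity values and accuracies below are invented for the example and are not the CENSREC-4 results.

```python
# Sketch (assumed details): exponential regression of word accuracy against a
# clarity (C-value-like) index; the data points are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def acc_model(c_value, a, b, d):
    """Accuracy saturating towards d as the clarity index grows."""
    return d - a * np.exp(-b * c_value)

c_vals = np.array([-5.0, -2.0, 0.0, 3.0, 6.0, 10.0])   # clarity index [dB]
accs = np.array([35.0, 52.0, 63.0, 78.0, 86.0, 91.0])  # word accuracy [%]

params, _ = curve_fit(acc_model, c_vals, accs, p0=(50.0, 0.2, 95.0))
print("predicted accuracy at C = 4 dB: %.1f %%" % acc_model(4.0, *params))
```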
Department of Biomedical Eng, Amirkabir University, Tehran, Iran
ABSTRACT
This paper presents a new method for reconstructing unreliable spectral components that uses the statistical distributions of preceding and following reliable frames together with the reliable components of the current frame. In this technique, an HMM is first used to model the temporal variation of the clean speech signal. Then, using this model and the probabilities of the noisy components occurring in each state, a distribution for the noisy components is estimated. Finally, a MAP estimate based on this distribution yields the final estimate of each unreliable component. The proposed method has been compared with a recent missing-feature method based on clustering feature vectors and shows a significant improvement in two different noisy environments.
Ritsumeikan University, Kyoto, Japan
ABSTRACT
Understanding of the acoustic sound field has recently attracted attention with the advance of computing power. In particular, it could enable acoustic sound detection, identification and recognition comparable to that of human listeners, provided sufficiently high performance is achieved. In addition, high-quality archiving of the acoustic sound field becomes possible if acoustic sound field dictation, covering not only human voices but also environmental sounds, can be realized. In this paper, we therefore attempt to realize acoustic sound field dictation. Such a system is useful for security applications and the like, because it can quickly search for an abnormal sound in a long captured signal based on text information. Environmental sound identification is indispensable for realizing an acoustic sound field dictation system. Previous research on environmental sound identification used methods that individually model every sound source. However, it is impossible to model the innumerable environmental sounds of the real world. In our research, we therefore try to reduce the number of models by utilizing onomatopoeia, which can evoke an acoustic sound through text. As a first step towards acoustic sound dictation, we aim at environmental sound identification with Hidden Markov Models (HMMs) based on onomatopoeia.
We carried out preliminary experiments with real environmental sounds to investigate the optimum parameters for identifying them. As a result, we confirmed that the optimum parameters for onomatopoeia-based HMMs are a 16 kHz sampling frequency, 16th-order MFCCs (Mel-Frequency Cepstrum Coefficients), eight states, and 128 mixtures. We then investigated the correspondence between environmental sounds and onomatopoeias using "pure models" and "complex models" for environmental sounds, corresponding to phoneme models and word models for voices, respectively. In an environmental sound identification experiment, we confirmed that the proposed approach reduced identification errors compared with the conventional approach. Finally, we carried out a subjective evaluation experiment with identification results for acoustic sounds in the real world. The results confirmed that the identification results of the proposed approach evoke the corresponding acoustic sounds more readily than those of the conventional approach. The proposed approach therefore realizes acoustic sound field dictation that is easy for humans to understand.
Ryukoku University, Shiga, Japan
ABSTRACT
Information support for hearing-impaired people is addressed. Automatic speech recognition, which converts speech to text, is promising for supporting hearing-impaired people, and several applications have been investigated, such as automatic captioning of TV programs and automatic transcription of oral presentations, lectures and meetings. These studies mainly focused on how to recognize speech accurately and did not pay attention to how to display the caption text. The display of caption text has not been a significant problem because a single speaker usually talks in TV news, oral presentations or lectures. By contrast, displaying caption text so that it is easy to understand who is talking is important in meetings in which more than one person participates. In TV programs or movies, a caption text is simply displayed at the bottom of the screen. This display method, called "TV-type caption" in this paper, is inadequate for meetings because it is hard to understand who is talking. Accordingly, we have proposed a novel caption display system that shows caption text in speech balloons near speakers' faces based on automatic face detection and speech recognition. In this paper, we evaluate the effect of speech-balloon captions compared with TV-type captions through a questionnaire survey in terms of appearance, readability of the caption text and comprehension. We confirmed that speech-balloon captions are preferable in terms of appearance and comprehension when several speakers are present, while TV-type captions are suitable in terms of appearance and readability when a single speaker talks.
Institute of Biomedical Engineering, National Yang-Ming University, Taiwan.
ABSTRACT
Mandarin is a tonal language. In Mandarin there are four lexical tones (tone 1 to tone 4) with four different fundamental frequency (f0) contours: flat and high, rising, falling and then rising, and falling, respectively. In order to process the signal according to lexical tone, we first have to identify which tone it is. We would like to find an efficient approach to identifying Mandarin tones from segments of the fundamental frequency contour. In this study, 3 male and 3 female participants recorded the speech materials. All participants were native Mandarin speakers with no history of any speech or hearing disorder, and all passed articulation and voice assessments. Two target syllables (/ti/ and /tu/) in the four lexical tones were used as materials. In our experiment, we analysed signal features and acoustic characteristics including the range of f0, the average f0 and so on, and then attempted to predict the tone from the segments. The results revealed that segments of the contours could not identify the corresponding tone correctly. The approach of this study may therefore not provide a way for hearing devices to predict Mandarin tone before signal processing. Further study of prediction from segments of f0 contours is required.
Nagoya University, Chikusaku, Nagoya, Japan
ABSTRACT
A target-speaker-like feature generation method is proposed for rapid acoustic model adaptation. Speech recognition performance degrades due to many factors, such as noise environments, speaking styles and individual differences. In particular, speaker-independent speech recognition under varied environments, as in PC-based distributed speech recognition systems, becomes much more difficult. To solve this problem, acoustic model adaptation to a specific speaker and environment by MLLR (Maximum Likelihood Linear Regression), or normalization-based training and recognition by SAT (Speaker Adaptive Training), are often used and are very effective. However, MLLR requires a certain quantity of utterances matched to the target speaker and environment; the same holds for SAT and other adaptation methods. Hereafter we discuss only speaker adaptation, but the discussion also covers environmental adaptation.
In this paper, we propose a technique to generate a sufficient amount of target-speaker-like speech features by converting a large pre-prepared set of speech features from many speakers (reference speakers) into features similar to those of the target speaker, using a transformation matrix obtained by the CMLLR (Constrained MLLR) technique. To generate a large amount of target-speaker-like features, the system needs only a very small amount of the target speaker's utterances. Using the target-speaker-like features, we can adapt the acoustic model efficiently. When this method is applied to all the reference speakers, it is theoretically almost equivalent to SAT. However, we combine it with similar reference speaker selection (SRSS), which cannot be used in the SAT framework. With SRSS, only the features of speakers originally similar to the target speaker are used for adaptation, which makes adaptation more efficient than using all reference speakers. To evaluate the proposed method, we prepared 100 reference speakers and 12 target (test) speakers, each with an average of 150 utterances. We compared our proposed method with MLLR and with the method theoretically equivalent to SAT in an isolated word recognition task using a speech database collected in a real PC-based distributed environment. The baseline performance obtained with the speaker-independent model was 58%. Using only three utterances as adaptation data, average word accuracies of 65%, 61% and 58% were obtained by the proposed method with 10 speakers selected by GMM-based SRSS, by the proposed method with all 100 reference speakers without speaker selection, and by MLLR, respectively. These results showed that the proposed method enables significantly faster model adaptation than conventional MLLR and SAT.
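The core operation of the method above is mapping reference-speaker features through a CMLLR-style affine transform so that they resemble the target speaker. The sketch below shows only that mapping; the transform matrix and bias are placeholders, since estimating them from the target speaker's few utterances is not reproduced here.

```python
# Sketch (assumed details): applying an affine feature transform x' = A x + b to
# pooled reference-speaker features; A and b are placeholders for a CMLLR
# estimate obtained from the target speaker's few utterances.
import numpy as np

def transform_features(feats: np.ndarray, A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """feats: (n_frames, dim) reference features -> target-speaker-like features."""
    return feats @ A.T + b

dim = 13                                               # e.g. 13 cepstral coefficients
rng = np.random.default_rng(0)
reference_feats = rng.standard_normal((500, dim))      # pooled reference frames
A = 0.9 * np.eye(dim)                                  # placeholder transform
b = np.full(dim, 0.1)                                  # placeholder bias
target_like = transform_features(reference_feats, A, b)
print(target_like.shape)                               # (500, 13) adapted frames
```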
(1) Iwate Prefectural University, Iwate, Japan (2) Tsukuba University, Ibaraki, Japan (3) AIST, Ibaraki, Japan
ABSTRACT
Recently, Spoken Term Detection (STD), which identifies the sections of spoken documents that match a user's query, has been one of the hottest topics in spoken document processing. We have proposed a subword-based STD method to deal with out-of-vocabulary query terms, and have demonstrated that newly proposed subwords such as the 1/2 phone and the Sub-phonetic Segment (SPS) work well for STD. This paper improves STD performance by integrating multiple STD results obtained using multiple subword language models. We prepare three different types of language models for each subword (monophone, triphone, 1/2 phone and SPS) using three different speech corpora: the JNAS (Japanese Newspaper Article Sentences) corpus, which includes read speech of newspaper articles with pronunciations; the CSJ (Corpus of Spontaneous Japanese), which includes actual presentation speech; and our WEB dictionary, whose entries were collected by searching WWW texts for keywords together with their pronunciations. We used 50 presentation speeches from the CSJ as test data and the remaining approximately 2600 presentation speeches from the CSJ for training the subword language models. Subword-based speech recognition using each subword language model is performed on the spoken documents. Three subword recognition results are obtained, and DP matching is performed between the sequence of subword models of a query and the three subword model sequences of the spoken documents; subword phonetic distances between any two subword models are used to recover from subword recognition errors. Three cumulative distances are computed for each candidate section; these distances are integrated linearly, and each candidate section is re-ranked according to the integrated distance. The proposed method improved STD performance for any combination of two or three STD results, confirming the effectiveness of integrating multiple STD results obtained with different subword language models.
(1) Institute of Communication Systems and Data Processing, RWTH Aachen University, Germany (2) Infineon Technologies, Sophia-Antipolis, France
ABSTRACT
The necessity of dereverberation algorithms in hand-held speech communication systems is discussed in this contribution. The study is based on a new measurement campaign with an artificial head and a two-microphone mock-up phone in realistic acoustical environments such as an office, a corridor and a stairway. Based on objective speech quality measures as well as a listening test, we show that room reverberation can lead to a decrease in intelligibility even for hand-held telephony under certain conditions. Hence, the far-end listener can benefit from dereverberation algorithms in the sending device. All measured room impulse responses are available online as part of the Aachen Impulse Response (AIR) database.
Speech/Language Information Research Center, Electronics and Telecommunications Research Institute, Daejeon, Korea
ABSTRACT
In this paper, we present a new single-channel noise reduction method that integrates compensation and soft masking under the same statistical model assumptions for noise-robust speech recognition. By utilizing a Gaussian mixture model (GMM) as prior knowledge of the speech and added noise signals, the proposed method can effectively restore clean speech spectra and separate ambient noise from the target speech in the Wiener filter framework. Soft mask methods originally attempted to separate the speech signal of the speaker of interest from a mixture of speech signals; in the proposed method, by using pre-trained speech and noise models, the soft mask techniques are applied to separate added noise from the target speech. Combined with the model-based Wiener filter performing compensation on the power spectrum, the technique can efficiently reduce distortions caused by non-stationary noise and finally reconstruct clean speech spectra from the noise-corrupted observation. By using the result to infer the a priori SNR of the Wiener filter, we can estimate the clean speech signal with greater accuracy.
While the conventional Wiener filter causes inevitable distortions owing to noise reduction and cannot cope with non-stationary noise overlapping speech-presence periods, the proposed method considerably alleviates these problems through compensation and soft masking based on the speech and noise GMMs. Results evaluated in a practical speech recognition system for car environments show that the proposed method outperforms conventional methods.
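For reference, the final gain stage of a Wiener-filter approach of this kind can be sketched as below. Only the generic gain rule is shown; the GMM-based compensation and soft masking that the proposed method uses to obtain the a priori SNR are not reproduced, and the numbers are illustrative.

```python
# Sketch (assumed details): per-bin Wiener gain from an a priori SNR estimate;
# the crude SNR estimate below stands in for the GMM-based one.
import numpy as np

def wiener_gain(a_priori_snr: np.ndarray) -> np.ndarray:
    """Wiener gain G = xi / (1 + xi) per frequency bin."""
    return a_priori_snr / (1.0 + a_priori_snr)

noisy_power = np.array([4.0, 1.0, 0.25, 9.0])           # |Y(k)|^2, illustrative
noise_power = np.ones(4)                                # estimated noise PSD
xi = np.maximum(noisy_power / noise_power - 1.0, 1e-3)  # crude a priori SNR
clean_power = (wiener_gain(xi) ** 2) * noisy_power      # enhanced power spectrum
print(clean_power)
```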
(1) Wakayama University, Wakayama, Japan (2) Ritsumeikan University, Kusatsu, Kyoto, Japan
ABSTRACT
Strong vocal expressions in singing use hoarse voice effectively in various manners. However, analysis and synthesis of such voice qualities have been a challenging topic with little success. An excitation structure extraction framework called XSX was introduced to represent such complex structured vocal excitation, with various types of aperiodicity, as an integral component of TANDEM-STRAIGHT, a widely used speech analysis, modification and resynthesis framework. TANDEM-STRAIGHT is basically a source-filter model extended by introducing a temporally stable power spectral representation for periodic signals and F0-adaptive spectral envelope estimation based on the consistent sampling theory. The excitation source signal used in TANDEM-STRAIGHT is a mixture of pulses and colored random signals. The source signal parameters are extracted by XSX and an aperiodicity extraction procedure. XSX is based on spectral division and inverse Fourier transform of power spectra by their spectral envelopes, calculated for a set of periodicity candidates. Combining salience scores for each candidate yields an integrated measure to detect locally periodic components. The aperiodicity extraction procedure is based on long-range linear prediction of band-pass signals obtained by a set of quadrature mirror filters applied to the original and the time-warped signals. This data-driven approach enables extraction and representation of complex excitation structures such as diplophonia. The analysis results are used to design the voice excitation source, which is capable of adding or modifying hoarse vocal expressions and enables morphing between two or more expressive performance examples.
Graduate School of Science and Engineering, Yamagata University, Japan
ABSTRACT
In practice, the performance of speech recognition systems is affected by speech signals being corrupted with various background noises in the environment. In this paper, we propose a new word graph combination (WGC) approach for speech-in-noise recognition. The aim of this work is to develop a method that ensures robust speech recognition under various noise conditions, in particular under the adverse effect of environmental and impulsive noise. For this purpose, we developed a word graph combination technique in which both continuous-mixture hidden Markov models (CMHMMs) and discrete-mixture hidden Markov models (DMHMMs) are used as acoustic models. It has previously been verified that a DMHMM-based system can ensure significant improvements in speech recognition performance under impulsive noise conditions. We also showed that the CMHMM-based system performs better in high-SNR conditions and under environmental noise. On the grounds of these findings, we adopted a system combination approach in which both a DMHMM and a CMHMM are used. With the proposed method, complementary effects can be anticipated because the CMHMM and the DMHMM exhibit different error trends. Among existing combination methods, which include recognizer output voting error reduction (ROVER) and confusion network combination (CNC), we selected WGC because, unlike ROVER and CNC, the timing information of all word hypotheses is well preserved. In our speech recognition experiments, the proposed system showed better performance than the ROVER-based system and the baseline system. In particular, the new system showed comparatively higher performance under mixed noise conditions.
Graduate School of Science and Engineering, Yamagata University, Japan
ABSTRACT
We investigate performance improvements in an automatic evaluation system for English pronunciation uttered by Japanese learners. In this system, Japanese and English acoustic models are used to detect mispronunciation at the phoneme level. We use hidden Markov models (HMMs) as acoustic models. English and Japanese HMMs are trained using speech data uttered by native English and Japanese speakers, respectively. Mispronunciation is detected by comparing the output likelihoods of the two models. In order to improve the performance of this system, we investigate the following points. (1) Reduction of acoustic mismatch: because speaker-independent acoustic models are used, a mismatch in speaker characteristics arises between the input speech and the acoustic models, and the mismatch between recording environments must also be considered. We therefore attempt to reduce the acoustic mismatch by using cepstral mean normalization (CMN) and histogram equalization (HEQ). (2) Analysis of the effectiveness of pronunciation error rules: in order to detect pronunciation errors at the phonetic level, the system uses pronunciation error rules, and we compare several error rules to clarify which are effective in evaluating pronunciation. To evaluate the proposed methods, we investigated the correlation between the objective evaluation value returned by the system and the subjective evaluation value given by English experts. We used the English Read by Japanese (ERJ) speech corpus as evaluation data. In this corpus, each utterance was given a score on the basis of a five-grade evaluation made by the experts, which we use as the subjective evaluation value. The experimental results showed that the combination of CMN and HEQ was the most effective. From the comparison of error rules, four rules were found to be particularly effective: vowel insertion at the end of a word, vowel substitution, vowel insertion between consonants, and consonant substitution.
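A minimal sketch of the two mismatch-reduction steps combined above, CMN and HEQ, applied to toy features. Mapping ranks onto a standard normal is one common reference choice for HEQ, assumed here for illustration.

```python
# Sketch (assumed details): cepstral mean normalisation plus a rank-based
# histogram equalisation of one feature dimension onto a standard normal.
import numpy as np
from scipy.stats import norm

def cmn(feats: np.ndarray) -> np.ndarray:
    """Subtract the per-dimension mean over the utterance."""
    return feats - feats.mean(axis=0, keepdims=True)

def heq_1d(x: np.ndarray) -> np.ndarray:
    """Map empirical ranks of one feature dimension onto a standard normal."""
    ranks = np.argsort(np.argsort(x))
    return norm.ppf((ranks + 0.5) / len(x))

rng = np.random.default_rng(0)
mfcc = 2.0 * rng.standard_normal((300, 13)) + 1.5      # toy utterance features
feats = cmn(mfcc)
feats[:, 0] = heq_1d(feats[:, 0])                      # equalise, e.g., c0
print(feats.mean(axis=0)[:3])                          # ~0 after normalisation
```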
(1) Institute of New Media and Communications, Department of Electrical Engineering, Seoul National University, Korea (2) Department of Electronics and Information Engineering, Sejong University, Korea
ABSTRACT
Our research aims to develop a low-complexity broadband microphone beamformer that is robust to moving interference signals. In this paper, an improved broadband GSC-RLS beamformer structure with a fast self-tuning algorithm is proposed. An improved fast self-tuning algorithm for the proposed GSC-RLS structure is also developed, based on Song's method. The computational complexity of the proposed GSC-RLS structure with the self-tuning algorithm is notably reduced. The simulation results show that the proposed beamformer performs better than the conventional GSC-RLS algorithm and has lower computational complexity than the conventional self-tuning GSC-RLS method.
(1) National Institute of Advanced Industrial Science and Technology (AIST), Japan (2) University of Tsukuba, Japan (3) Iwate Prefectural University, Japan
ABSTRACT
We are interested in retrieving information from speech data using subword-based search. This paper describes an efficient two-stage approach using a sub-phonetic-segment N-gram index and shift continuous dynamic programming for open-vocabulary spoken term detection. With this two-stage search, we attempt to improve both retrieval accuracy and processing time. In the speech recognition process, a more sophisticated subword that is shorter than a phoneme is used to minimize the effect of recognition errors. Then, in the indexing and search process, N-gram and block addressing techniques are adopted to improve the search speed. In addition, in order to reduce missed detections in indexing, the N-best hypotheses are added directly to the inverted index. We investigate the properties of each method and examine their usefulness for the open-vocabulary spoken term detection task.
(1) ETRI, Daejeon, Korea (2) Chungnam National University, Daejeon, Korea
ABSTRACT
As mobile devices become multi-functional and multiple devices converge into a single device, there is a strong market need for an audio codec that can provide consistent quality for mixed speech and music content. In this paper, we propose a unified speech and audio coding technology that provides the best quality for both speech and music content. We designed a new codec architecture based on two strategies. Firstly, we reused tools from existing audio and speech codecs and found the best combination. Instead of designing new coding tools to enhance efficiency, we investigated the performance of existing tools for reuse; each tool was evaluated individually with an appropriate evaluation procedure and the best combination was selected. Secondly, we revised the selected tools and developed new tools to harmonize them. To enhance performance, each tool in the selected combination was revised; the combination must adapt to the characteristics of the input signal to deliver consistent performance, so a signal analysis tool and a harmonization tool were newly introduced.
The codec architecture consists of several coding modules. The Signal State Decision (SSD) module is newly introduced to analyse the characteristics of the input signal and categorize it as either a steady-state signal or a complex-state signal. The parametric stereo coding and bandwidth extension modules are reused, with modifications to enhance performance. All modules are controlled by the output of the SSD module. The coding scheme of the Steady State Signal Processing module is based on LPC-residual coding, while a transform-based coding scheme is adopted as the core coding for complex-state signals. The Signal State Harmonization (SSH) module compensates for block discontinuities and artifacts between the Steady State Signal Processing module and the Complex State Signal Processing module. To evaluate the new codec architecture, we followed MPEG audio listening test procedures. Items in three categories (music, speech, mixed) were used and audio experts participated in the listening test. The results show that the performance of the new codec is statistically better than state-of-the-art speech and audio codecs for each category of items. The new codec architecture could be used for digital radio, mobile TV, audio books and other applications that need consistent quality for both speech and music signals.
School of EE&T, University of New South Wales, Australia
ABSTRACT
Objective speech quality measurements can be made more accurately and robustly by analyzing individual distortions of the afflicted speech signal. In previous papers, "Salient Formant Points" (SFP) extracted from the output of a hydromechanical, physiologically motivated cochlear model have been used to predict the perceptibility of Temporally Localized Distortions (TLD). The feature represents areas of high-energy vocal tract resonances resolved in the output of the cochlear model. TLDs include afflictions best described by words such as "Fluttery", "Babbly", "Harsh" and "Interrupted", and represent the highest variance amongst speech signals subjected to coding, network errors and environmental noise. In previous work we reported the high correlation between the predicted and actual subjective Diagnostic Acceptability Measure (DAM) elementary perceptual quality (EPQ) scores that represent these distortions. In this paper we investigate the algorithm's tolerance to the alteration of various factors that affect the accuracy of the TLD prediction. The parameters investigated include misalignment between the original and degraded speech signals, inaccurate voiced/unvoiced decisions, and inaccurate speech level normalization at the input to the cochlear model (which affects the output of the non-linear cochlear model). Results are illustrated for each of these factors.
(1) Acoustic Technology, Department of Electrical Engineering, Technical University of Denmark, Lyngby, Denmark. (2) GN Research, GN ReSound A/S, Ballerup, Denmark
ABSTRACT
Feedback oscillation is one of the major issues with hearing aids. An efficient way of feedback suppression is feedback cancellation, which uses an adaptive filter to estimate the feedback path. However, the feedback canceller suffers from the bias problem in the feedback path estimate. The recent progress suggests a feedback canceller with linear prediction of the desired signal in order to eliminate the bias when certain conditions are met. However, the bias still remains in many situations, for example when the input signal is voiced speech. Noise injection is investigated in this paper to help reduce the bias further and improve the system performance. Two nearly inaudible noises are proposed: a masking noise, which is tailored to the hearing-aid application, and a linear prediction based noise, which is especially efficient for feedback cancellation with linear prediction. Simulation results show that noise injection can further reduce the feedback estimation error by 1-4 dB and/or increase the stable gain by 3-4 dB, depending on the characteristics of the input signal.
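For context, the feedback-path estimate at the heart of such a canceller is typically obtained with an adaptive filter. The sketch below uses a plain NLMS identification loop on synthetic signals; NLMS, the filter length and the signals are illustrative assumptions, not the algorithm evaluated in the paper.

```python
# Sketch (assumed details): NLMS identification of a synthetic feedback path
# from the loudspeaker signal to the microphone signal.
import numpy as np

def nlms_identify(x, d, taps=32, mu=0.5, eps=1e-8):
    """x: loudspeaker signal, d: microphone signal; returns the path estimate."""
    w = np.zeros(taps)
    for n in range(taps, len(x)):
        u = x[n - taps + 1:n + 1][::-1]        # most recent samples first
        e = d[n] - w @ u                       # cancellation error
        w += mu * e * u / (u @ u + eps)        # normalised LMS update
    return w

rng = np.random.default_rng(0)
true_path = rng.standard_normal(32) * np.exp(-np.arange(32) / 8.0)
x = rng.standard_normal(4000)                  # loudspeaker (probe) signal
d = np.convolve(x, true_path)[:len(x)]         # microphone pick-up
w_hat = nlms_identify(x, d)
misalignment = np.sum((true_path - w_hat) ** 2) / np.sum(true_path ** 2)
print("misalignment [dB]:", 10 * np.log10(misalignment))
```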
Ryukoku University, Shiga, Japan
ABSTRACT
Automatic speech recognition (ASR) for multilingual audio contents, such as international conference recordings and broadcast news, is addressed. For handling such contents efficiently, simultaneous ASR is promising. Conventionally, ASR has been performed independently, language by language, even though multilingual speech consisting of utterances in several languages with the same meaning is available. We have proposed a bilingual speech recognition framework based on statistical ASR and machine translation (MT) in which bilingual ASR is performed simultaneously and complementarily. In this simultaneous recognition framework, ASR must calculate not only an acoustic model score and a language model score but also a translation model (TM) score. In this study, we investigate an efficient calculation method for TM scores. A TM score represents how well a sentence corresponds to a sentence in another language. In general, between different languages a word can be translated into various words, and word orders differ; considering these characteristics, it is preferable to model TM scores statistically. In a statistical translation model, every word is modeled as having some probability of being translated into every word. For instance, for the matching (alignment) of an n-word sentence and an m-word sentence, there are n to the m-th power word alignments. For a strict calculation of statistical TM scores, we first calculate the probability of each alignment and then sum them. However, this calculation is too costly and is inadequate for a real-time system. In this study, we try to reduce the computational cost. Specifically, since the probabilities of almost all alignments are much smaller than the highest alignment probability, we regard the highest alignment probability as the TM score. We compared TM score calculation methods in terms of time and accuracy for Japanese ASR using English information in the bilingual recognition framework. We achieved a significant reduction in processing time for TM score calculation without any degradation in ASR accuracy.
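The approximation proposed above, scoring only the best word alignment instead of summing over all alignments, can be sketched with a Model-1-style lexical table. The toy dictionary and sentence pair are invented for illustration.

```python
# Sketch (assumed details): Model-1-style TM scoring with the full sum over
# source words versus the max-alignment approximation proposed above.
lex = {("watashi", "i"): 0.6, ("watashi", "eat"): 0.05,
       ("taberu", "eat"): 0.7, ("taberu", "i"): 0.05}

def tm_score(src, tgt, use_max=True, floor=1e-4):
    """Product over target words of the max (or average) lexical probability."""
    score = 1.0
    for e in tgt:
        probs = [lex.get((j, e), floor) for j in src]
        score *= max(probs) if use_max else sum(probs) / len(src)
    return score

src, tgt = ["watashi", "taberu"], ["i", "eat"]
print("max-alignment score:", tm_score(src, tgt, use_max=True))
print("full-sum score     :", tm_score(src, tgt, use_max=False))
```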
(1) Tokyo University of Science, Suwa, Japan (2) Yamanashi Eiwa College, Japan
ABSTRACT
Currently, many researchers work on singing synthesis in Japanese, English and other languages. However, there has been little research on singing synthesis in Chinese, so this paper studies four-tone modeling for natural singing synthesis in Chinese.
The four tones are characteristic of Chinese syllables and are modeled as follows: the 1st tone is a horizontal linear function, the 2nd tone is a linearly increasing function, the 3rd tone is a quadratic function and the 4th tone is a linearly decreasing function. Four types of four-tone models are defined in order to identify an optimal four-tone model; these models differ in the duration and range over which the fundamental frequency changes. The proposed four-tone models are controlled by a parameter that determines the changing rate of the fundamental frequency. This paper describes three subjective evaluations, based on the Scheffe paired comparison, carried out to determine an optimal model and parameter: the first evaluation determines an optimal model among the four types of models, the second determines a sub-optimal parameter and the third determines an optimal parameter.
As a result of the first evaluation, the following points were clarified about fundamental frequency control for natural singing synthesis: for the 1st tone there is no need to change the fundamental frequency from that of the score, the fundamental frequency of the 2nd tone should change during the last half of the duration, and the fundamental frequencies of both the 3rd and 4th tones should change during the first half of the duration. The second evaluation showed that the sub-optimal changing rate is between 1.0% and 2.5%; giving the 2nd, 3rd and 4th tones the same changing rate in this range improves the naturalness of the synthesized singing. The third evaluation clarified that the optimal changing rates for the 2nd, 3rd and 4th tones are 1.5%, 1.0% and 1.5%, respectively. The naturalness of synthesized singing can be improved by applying the optimal four-tone model and parameter, compared with singing synthesized without considering the four tones. This paper also compares the changing rate of the fundamental frequency for synthesized singing with that for speaking voices.
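A stylized sketch of the four-tone F0 control described above: tone 1 keeps the score pitch, tone 2 rises during the last half of the note, tone 3 follows a quadratic dip, and tone 4 falls during the first half. The sampling grid and the exact curve shapes are assumptions; only the 1.0-1.5% changing rates come from the abstract.

```python
# Sketch (assumed details): stylized per-note F0 contours for the four tones;
# only the 1.0-1.5 % changing rates are taken from the text.
import numpy as np

def tone_contour(f0_score: float, tone: int, n: int = 100, rate: float = 0.015):
    """F0 contour of one note, given the score pitch and the tone number 1-4."""
    t = np.linspace(0.0, 1.0, n)
    delta = rate * f0_score
    if tone == 1:                                 # flat: keep the score pitch
        return np.full(n, f0_score)
    if tone == 2:                                 # rise during the last half
        return f0_score + delta * np.clip(2 * t - 1, 0, 1)
    if tone == 3:                                 # quadratic dip
        return f0_score - delta * (1 - (2 * t - 1) ** 2)
    if tone == 4:                                 # fall during the first half
        return f0_score - delta * np.clip(2 * t, 0, 1)
    raise ValueError("tone must be 1-4")

print(tone_contour(440.0, 2)[[0, 49, 99]])        # start, middle, end of a tone-2 note
```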
EE Dept, Indian Institute of Technology Bombay, India
ABSTRACT
An electrolarynx, a verbal communication aid used by laryngectomy patients, is a vibrator held against the neck tissue to provide excitation to the vocal tract as a substitute for that provided by glottal vibrations. Although the user can set the vibration level and pitch, dynamic control of level, voicing and pitch during speech production is not feasible. In addition to this basic limitation, electrolaryngeal speech suffers from (i) background noise caused by leakage of acoustic energy from the vibrator and the vibrator-tissue interface, (ii) low-frequency spectral deficiency, and (iii) unnatural quality due to constant pitch and level. The background noise decreases intelligibility, while the other two factors affect speech quality. The present study investigated ways of improving the intelligibility and quality of electrolaryngeal speech. Pitch-synchronous application of generalized spectral subtraction was used for reducing the background noise. In order to track the variation in the spectrum of the leakage noise due to changes in vibrator orientation and pressure during speech production, a dynamic estimate of the noise was obtained from a set of past frames. The estimated noise spectrum was subtracted from that of the noisy speech and the resulting magnitude spectrum was combined with the original phase spectrum. The speech signal was resynthesized using the overlap-add method, with two-pitch-period analysis frames and one period of overlap. Estimating the phase spectrum under a minimum-phase assumption or an assumption of phase continuity did not improve the speech quality. The introduction of jitter and shimmer into the speech signal, using LPC-based analysis-synthesis, was investigated for improving its naturalness. The excitation for synthesis was an impulse train with frequency equal to that of the vibrator, with random frequency and amplitude modulations providing the jitter and the shimmer, respectively. FIR filtering of the excitation was used to match the long-term average spectral envelope of the processed electrolaryngeal speech to that of normal speech. A peak-to-peak jitter of up to 6% increased the naturalness, while the introduction of shimmer decreased the quality.
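The noise-reduction step above is a generalized spectral subtraction; a simplified, non-pitch-synchronous sketch is given below. The over-subtraction factor, spectral floor and synthetic signals are assumptions, and the dynamic noise tracking over past frames is omitted.

```python
# Sketch (assumed details): frame-wise generalized spectral subtraction with an
# over-subtraction factor and a spectral floor, recombined with the noisy phase.
import numpy as np

def spectral_subtract(frame, noise_mag, alpha=2.0, beta=0.02):
    """Subtract an over-estimated noise magnitude, keeping a spectral floor."""
    spec = np.fft.rfft(frame * np.hanning(len(frame)))
    mag, phase = np.abs(spec), np.angle(spec)
    cleaned = np.maximum(mag - alpha * noise_mag, beta * mag)
    return np.fft.irfft(cleaned * np.exp(1j * phase), n=len(frame))

rng = np.random.default_rng(0)
n = 512
tone = np.sin(2 * np.pi * 200 * np.arange(n) / 8000)         # stand-in for speech
frame = tone + 0.3 * rng.standard_normal(n)                  # noisy frame
noise_mag = np.abs(np.fft.rfft(0.3 * rng.standard_normal(n) * np.hanning(n)))
enhanced = spectral_subtract(frame, noise_mag)
print(enhanced.shape)                                        # one processed frame
```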
Indian Institute of Technology, Bombay, India
ABSTRACT
Most of the children with prelingual profound hearing impairment do not learn to speak properly despite a fully functional speech production system as they lack auditory feedback. Speech-training systems providing visual feedback of vocal-tract shape are found to be useful in improving vowel articulation. However, vocal-tract shape estimation based on linear predictive coding (LPC) and other analysis techniques generally fails during stop closures, restricting the effectiveness of visual feedback for production of consonants not having visible articulatory efforts. Production of vowel-oral stop consonant-vowel utterances involves movement of articulators from the articulatory position of the initial vowel towards that of the oral stop closure, and then to that of the final vowel. A technique for estimation of place of articulation during stop closures by performing bivariate polynomial modeling on time-varying vocal tract area values, estimated using LPC, during transition segments preceding and following the closure has been reported earlier. The places estimated using the technique compared well with the actual constriction locations observed from the articulatory data.
Investigations using the technique showed that the estimation accuracy depended on the correct identification of the pre- and post closure transition segments. The closure release is generally in the form of a short frication burst. As the LPC based estimation of area values is not applicable during the frication, the closure release burst needs to be excluded from the post-closure transition segment, without deleting any significant part of the transition. A technique using a rate of change function applied on the parameters of the Gaussian mixture model of short-time log-magnitude speech spectrum is investigated for accurately locating the closure burst and excluding it from the post-closure transition segment, for an improved estimation of the place of maximum constriction during the closure.
(1) Graduate School of Science and Technology, Kumamoto University, Japan (2) Kumamoto University, Japan
ABSTRACT
Speech features such as the formants of vowels uttered by many talkers are considered to form a normal distribution for each phoneme in a feature space. However, those features may show different apparent dispersions peculiar to the estimation methods. Therefore, if the correct distributions can be found by a credible method, feature estimation errors can be clearly defined, making comparative evaluations between feature estimation methods possible. In this paper, we first propose a data reduction method to estimate the true formant distributions of vowels. In the method, we apply principal component analysis to the formant data of each vowel in the F1-F2 space to find an average value and a three-sigma ellipse. If the average and the ellipse are computed iteratively, removing the data outside the ellipse as errors at each step, they finally converge. The proportion of data samples within the final ellipse to all data will differ between formant estimation methods, and we consider the estimation method with the larger proportion to be more accurate. The IFC (Inverse Filter Control) method, in which formants are estimated from zero-crossing information, has been compared with the LPC method under this criterion. Analysis using vowels in words has shown that the IFC method is superior to LPC in both the proportion and the ratio of the area of the final ellipse to that of the initial one. The proportions of data within the final ellipses are 91-96% for IFC and 85-95% for LPC, obtained from the five Japanese vowels uttered by 20 males. The intuition obtained by observing the distributions supports the numerical results of the analysis. Based on these results, we conclude that formant estimation using zero-crossing information (IFC) is more effective than estimation from spectral shapes (LPC).
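The iterative data-reduction procedure above can be sketched as repeated trimming of F1-F2 points outside a three-sigma Mahalanobis ellipse until the retained set stabilizes. The synthetic formant cloud and outliers below are invented for illustration.

```python
# Sketch (assumed details): iterative three-sigma trimming of F1-F2 points using
# the Mahalanobis distance; the synthetic formant cloud is for illustration.
import numpy as np

def trim_to_ellipse(points: np.ndarray, n_sigma: float = 3.0, max_iter: int = 20):
    """Return the points surviving iterative three-sigma ellipse trimming."""
    kept = points
    for _ in range(max_iter):
        mean = kept.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(kept, rowvar=False))
        diff = kept - mean
        d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)   # squared Mahalanobis distance
        inside = d2 <= n_sigma ** 2
        if inside.all():
            break
        kept = kept[inside]
    return kept

rng = np.random.default_rng(0)
vowel = rng.multivariate_normal([750, 1200], [[80**2, 0], [0, 120**2]], 200)
outliers = rng.uniform([300, 800], [1500, 2500], (10, 2))    # gross estimation errors
f1f2 = np.vstack([vowel, outliers])
kept = trim_to_ellipse(f1f2)
print("proportion retained: %.2f" % (len(kept) / len(f1f2)))
```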
Engineering Physics, Institute of Technology Bandung, Indonesia
ABSTRACT
Since 2008, intercepted speech communication has been accepted as legal evidence in Indonesian courts. The first case to use this kind of evidence in court was a corruption case handled by the Indonesian Corruption Eradication Commission (KPK). To bring speech evidence to court, it must be examined by a forensic speaker identification examiner to verify the identity of the speaker. Two groups of speech samples are needed: the first is the intercepted voice belonging to an unidentified speaker, while the second is a voice recording of an identified suspect. The examiner compares the two sample groups and confirms the identity of the speaker in the first sample. This examination is strongly influenced by the conditions under which the second sample group is recorded from the suspect: if the defendant cooperates with investigators, the identification process goes well, and vice versa. This paper describes the development of forensic speaker identification methods used in handling corruption cases in Indonesian courts. Identification results using real data as well as simulated speech signals are also discussed.
Institute of Communication Systems and Data Processing, RWTH Aachen University, Germany
ABSTRACT
When evaluating new algorithms for speech and audio coding or enhancement systems (e.g., noise reduction, echo control, or artificial bandwidth extension), one will usually listen to audio examples on headphones and not use any loudspeaker setup that might be available. The reasoning behind this choice is that a headphone reproduction system makes it easier to identify even small signal processing artifacts, which would be at least partly concealed by room reflections in listening rooms.
Usually, these artifacts due to coding or signal enhancement cannot be completely removed but only minimized with respect to the constraints of the application. Examples are a limited data rate for speech and audio coding, or a trade-off between noise attenuation and speech distortion in noise reduction algorithms.
Based on the aforementioned superiority of headphones for making these artifacts noticeable, this contribution presents a postfilter that mimics the properties of listening rooms to conceal residual errors and artifacts. This postfilter is a finite impulse response filter designed according to measured or simulated room impulse responses.
The main focus of this contribution is the evaluation of different types of impulse responses for reverberation-based postfiltering of speech signals transmitted by speech codecs at low data rates. In an exemplary study based on the Adaptive Multi-Rate Wideband (AMR-WB) speech codec, the proposed post-processing leads to an increase in the speech transmission index (STI), which indicates better intelligibility. Optimized impulse responses for the different data rates of AMR-WB are given in order to maximize the STI.
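The postfilter described above amounts to FIR filtering of the decoded speech with a room impulse response. The sketch below uses a synthetic exponentially decaying response as a stand-in for a measured or simulated one (e.g. an AIR database entry).

```python
# Sketch (assumed details): FIR postfiltering of decoded speech with a synthetic
# exponentially decaying room impulse response.
import numpy as np

def postfilter(decoded: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Convolve the codec output with a room impulse response (same length out)."""
    return np.convolve(decoded, rir)[:len(decoded)]

fs = 16000
rng = np.random.default_rng(0)
t = np.arange(int(0.2 * fs)) / fs                       # 200 ms synthetic response
rir = rng.standard_normal(len(t)) * np.exp(-t / 0.05)
rir[0] = 1.0                                            # keep a strong direct path
rir /= np.sqrt(np.sum(rir ** 2))                        # unit-energy normalisation
decoded = rng.standard_normal(fs)                       # placeholder codec output
print(postfilter(decoded, rir).shape)                   # (16000,) postfiltered signal
```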
Technical University of Berlin, Berlin, Germany
ABSTRACT
The use of the Extensible Markup Language (XML) can no longer be ignored in today's Text-to-Speech (TTS) systems. XML is employed not only for internal data structures and for the modularity of such systems, but also for communication in client-server architectures. In previous work, however, the advantages of XML have not been exploited for the transcription step. This step is frequently implemented as a self-contained module that permits an exchange of the transcription rules only with difficulty, if at all. Yet rule-based multilingual TTS systems in particular would profit from the integration of different sets of transcription rules, and a standardized notation would also simplify exchange between research groups. This contribution proposes an XML-based notation for rule-based TTS systems with which a set of context-sensitive rules for grapheme-to-phoneme transcription can be described independently of the system. As a proof of concept, typical applications of the XML notation are illustrated.
Faculty of Engineering, Nagasaki University, Nagasaki, Japan
ABSTRACT
One of the characteristics of spontaneous speech is the occurrence of many types of filled pauses, which usually degrade speech recognition performance considerably. In this paper, we first investigate the occurrence rate of filled pauses in spontaneous speech using a large corpus. Next, based on the obtained results, we propose a two-step recognition procedure for spontaneous speech that is augmented by unsupervised estimation of the filled pauses uttered by each speaker. We used the Corpus of Spontaneous Japanese (CSJ), which consists of numerous academic presentation speeches, to analyze the occurrence of filled pauses in spontaneous speech. The investigation revealed that the cumulative occurrence frequency of filled pauses reaches 0.8 with only four specific filled pauses on average, and that these frequent filled pauses differ among speakers. Based on these characteristics, we propose a speech recognition procedure constructed from two recognition processes using a common and an individual lexicon, used in the first and second processes respectively. The filled-pause entries in the individual lexicon were estimated from their occurrence frequencies observed in the preliminary results of the first recognition process. For the evaluation of this procedure, we used speeches from ten male speakers, containing approximately 27k words, from the CSJ. The word entry size of each lexicon was 40k; the number of independent phonetic transcriptions of filled-pause entries in the common lexicon was 284, and that in the individual lexicon was reduced to 9.4 on average by the filled-pause estimation process. The proposed procedure demonstrated a statistically significant increase in word accuracy (1.1% word-error reduction), and also indicated that filled pauses rarely used by a speaker hinder improvements in word accuracy. We also showed that an individual lexicon configured from a combination of N-best results and a word confidence score limitation yielded a significant improvement (1.3% word-error reduction). Furthermore, we examined methods to reduce the processing time by using multiple hypotheses and confidence score limitations; our procedure achieved a 41% reduction in the number of recognition segments of the first recognition process by using the N-best results and the word confidence score limitations.
University of Applied Sciences, Hamburg, Germany
ABSTRACT
This paper investigates the impact of F3 manipulations on given human voice signals. For this purpose, two psychoacoustic experiments were carried out. Following the source-filter theory of speech production, two modifications of the formant F3 were investigated: the impact of shifting its frequency and of widening its bandwidth on perceived vowel quality. These isolated manipulations are possible by means of linear prediction (LP) analysis. Root extraction of the LP data in the z-plane, in combination with FIR pole-zero filter design, allows parametric formant manipulations. For the sake of control, the test sounds are pitched synthetically. For the psychoacoustic tests, originally spoken reference vowels are used that cover a wide range of vowel qualities; in other words, these reference vowels include as much tongue and jaw characteristics as possible. Subjects had to rate the similarity of the perceived vowel quality of two manipulated sounds against the original reference sound. A general result of the study is that vowel quality perception is rather tolerant of bandwidth manipulations but quite sensitive to frequency manipulations. Only 60% of the subjects perceived vowel quality dissimilarities even when the bandwidth of F3 was increased by about 1000 Hz, a widening that implies a complete elimination of F3 in most of the test sounds. By contrast, F3 frequency shifts of only 150 Hz already evoked perceptual differences for 80% of the subjects.
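Formant manipulation via root extraction, as used above, can be sketched by rotating the pole pair of the targeted formant in the z-plane. The all-pole coefficients below are synthetic; in the study they would come from LP analysis of the recorded vowels, and the matching tolerance is an assumption.

```python
# Sketch (assumed details): shifting one formant by rotating the matching LPC
# pole pair in the z-plane; the all-pole coefficients are synthetic.
import numpy as np

fs = 16000
formants = [500, 1500, 2500, 3500]                 # assumed formant frequencies [Hz]
bandwidths = [80, 90, 120, 150]                    # assumed bandwidths [Hz]
poles = []
for f, bw in zip(formants, bandwidths):
    r = np.exp(-np.pi * bw / fs)
    poles += [r * np.exp(1j * 2 * np.pi * f / fs), r * np.exp(-1j * 2 * np.pi * f / fs)]
a = np.real(np.poly(poles))                        # LPC-style denominator coefficients

def shift_formant(a, f_old, f_shift, fs, tol=200.0):
    """Rotate the pole pair nearest f_old by f_shift Hz; return new coefficients."""
    new_roots = []
    for z in np.roots(a):
        f = abs(np.angle(z)) * fs / (2 * np.pi)
        if abs(f - f_old) < tol:                   # the targeted formant's pole pair
            z = abs(z) * np.exp(1j * np.sign(np.angle(z)) * 2 * np.pi * (f + f_shift) / fs)
        new_roots.append(z)
    return np.real(np.poly(new_roots))

a_shifted = shift_formant(a, f_old=2500.0, f_shift=150.0, fs=fs)   # move F3 up 150 Hz
print(len(a_shifted), "filter coefficients")
```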
(1) The University of Tokushima, Tokushima, Japan (2) Chiba Institute of Technology, Chiba, Japan (3) Tohoku University, Sendai, Japan (4) Tohoku Bunka Gakuen University, Sendai, Japan
ABSTRACT
Many speech coding systems, such as CELP, MELP and others, employ a vector quantizer to encode a sequence of speech feature vectors. Increasing the size of the codebook lowers the quantization distortion, but it also raises the bit rate. In order to decrease the bit rate without high distortion, a segment quantizer can be used instead of a vector quantizer; it can encode a sequence of feature vectors efficiently by exploiting the temporal correlation between vectors.
One of the most important problems is how to split a sequence into segments. Splitting by fixed length is the simplest method, but each segment should correspond to some variable-length chunk, for example a phoneme or sub-word. The LZSQ method can acquire variable-length segments automatically; however, it cannot acquire an optimum segmentation because it determines segment boundaries one after another.
In this paper, a new speech coder based on ML-BEATS is proposed. ML-BEATS represents a sequence of speech feature vectors by many HMMs. Each HMM corresponds to a segment, and the boundaries are determined appropriately by a maximum likelihood criterion.
There are three phases to construct a speech coding system based on ML-BEATS.
1. Constructing an HMM-based segment codebook by ML-BEATS.
The parameters of the HMMs corresponding to segments are trained using training speech samples. The training samples are segmented using the HMMs, and the HMMs are retrained using the segmented training samples. Both steps are carried out repeatedly, and finally the HMMs are obtained. These HMMs are used as the codebook.
2. Encoding input speech
The input speech is encoded using the HMMs. The Viterbi algorithm is used to establish the correspondence between HMMs and frames of the input speech. Finally, the HMM indices and duration information are sent to the decoder.
3. Decoding using the HMM speech synthesis method
At the decoder, a sequence of speech feature vectors is generated from the HMM parameters. This algorithm was originally proposed for HMM-based speech synthesis, and it generates a sequence with maximum likelihood.
In order to investigate the effectiveness of the proposed method, LSP sequences were encoded. The proposed method gave 1.13 dB spectral distortion at 5.83 bits per frame.
Faculty of Systems Science and Technology, Akita Prefectural University, Japan
ABSTRACT
One of the purposes of sound emission in public space is to transfer the information it carries. Since sound waves at audible frequencies have wavelengths comparable to the objects around us, it is difficult to prevent their propagation to places where they are not wanted, due to diffraction and reflection. If the information in sound can be conveyed only at a desired local spot in the sound field, communication with sound gains a new property beyond this physical limitation. Although the parametric loudspeaker based on ultrasound is useful for such needs, it can limit only the "direction" of sound propagation, not the local "spot". In this paper, another approach for the reproduction of speech signals at a local spot is introduced. It is based on signal decomposition into orthogonal basis functions made from random vectors. This approach was applied to the transaural system by Negi et al. It has some difficulties, however, in the reproduction of speech signals at a local spot. One of them is that the content of the speech can be heard from the synthesized signal at points other than the desired spot, although its quality is degraded by the decomposition into random signals. Since our study targets the reproduction of speech signals, the locations of the sound sources emitting the decomposed random signals are related to how easily the speech content can be understood. The performance is not satisfactory when the sound sources are located at the same distance from the desired spot; in this case the content of the synthesized speech can be heard around the desired spot. Distributing the distances of the sound sources from the spot has the potential to improve the performance. In this paper, the relation between several sound source arrangements and the signals synthesized from the decomposition into random signals is discussed via computer simulation, and the synthesized speech signals are demonstrated and evaluated with a few measures.
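The decomposition idea above can be sketched by building an orthonormal basis from random vectors (via QR), splitting it among the sources, and projecting one signal block onto each share so that only the sum of all emitted components reconstructs the block. Block length and source count are assumptions.

```python
# Sketch (assumed details): splitting a random orthonormal basis among several
# sources and projecting one speech block onto each share; only the sum of all
# components reconstructs the block.
import numpy as np

rng = np.random.default_rng(0)
block_len, n_sources = 256, 4
Q, _ = np.linalg.qr(rng.standard_normal((block_len, block_len)))   # random orthonormal basis

speech_block = np.sin(2 * np.pi * 5 * np.arange(block_len) / block_len)  # placeholder block
splits = np.array_split(np.arange(block_len), n_sources)                 # basis share per source
components = [Q[:, idx] @ (Q[:, idx].T @ speech_block) for idx in splits]

print("max reconstruction error:", np.max(np.abs(sum(components) - speech_block)))
```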
Ritsumeikan University, Kusatsu, Kyoto, Japan
ABSTRACT
Automatic speech recognition (ASR) in noisy environments is one of the most challenging topics in the field. In particular, speech produced in noise is strongly distorted compared with neutral speech produced in quiet. This distortion is called the Lombard effect, and it degrades ASR performance. It occurs strongly when the speaker receives no auditory feedback. Previous research reports that its typical features are an increase in power, an increase in fundamental frequency (F0), a flattening of the spectral envelope, and an upward frequency shift of the first formant (F1) and the second formant (F2). Consequently, ASR performance degrades when no special compensation for these features is applied.
To overcome this problem, previous work proposed retraining the acoustic model with a large amount of Lombard speech, adapting the acoustic model with some Lombard speech, and so on. However, both approaches require a sufficient amount of Lombard speech to be available in advance. In this paper, we therefore propose a new approach based on voice conversion from Lombard speech towards neutral speech. Its advantage is that it requires neither modification of the acoustic model nor a large amount of Lombard speech in advance. We aim to improve Lombard speech recognition performance with a voice conversion technique towards neutral speech. We first analyze the F0, F1 and F2 features of Lombard and neutral speech in detail. The analysis confirms that F0, F1 and F2 are higher in Lombard speech than in neutral speech, and that the variances of the ascending rates of F1 and F2 are smaller than that of F0. We therefore employ F1 and F2 as the features for voice conversion, and convert Lombard speech to neutral speech by equalizing the ascending rates of the F1 and F2 features. Evaluation experiments were carried out in computer simulation, and the results show that ASR performance increases by 12 % for female speakers and 3 % for male speakers with the proposed method. This confirms that Lombard speech can be recognized robustly without redesigning the acoustic model. In future work we will analyze other features, such as the spectral envelope, to further improve Lombard speech recognition performance.
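The conversion step described above can be pictured as rescaling the Lombard formant tracks by average ascending rates. The sketch below assumes that F1/F2 tracks have already been extracted and that the ascending rates are known constants; it is a simplification for illustration, not the authors' procedure.

import numpy as np

def convert_formants(f1_lombard, f2_lombard, rate_f1, rate_f2):
    """Map Lombard F1/F2 tracks towards neutral speech by dividing out
    the average ascending rates (hypothetical values shown below)."""
    f1_neutral = np.asarray(f1_lombard) / rate_f1
    f2_neutral = np.asarray(f2_lombard) / rate_f2
    return f1_neutral, f2_neutral

# Hypothetical ascending rates: Lombard F1/F2 about 10 % and 5 % higher
f1, f2 = convert_formants([620.0, 640.0], [1750.0, 1800.0], 1.10, 1.05)
print(f1, f2)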
Széchenyi István University, Györ, Hungary
ABSTRACT
In virtual audio synthesis we use different excitation signals for listening tests. These tests are executed using headphone playback and real-time HRTF synthesis. Besides noise signals, speech is often used, especially for real-life applications such as mobile phones, various voice transmission lines, computer environments or accessories for the visually impaired, where speech intelligibility is important. This paper presents an overview of the results of a listening test using different signals, including speech, in a virtual audio environment aimed at blind persons. The results show how speech contributes to the accessibility of computers and how it performs in a comparative test of virtual audio simulation. Furthermore, additional speech test signals are introduced, such as speech chorus signals, segmented spondees and the newly developed spearcons.
Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing, P.R.China
ABSTRACT
This paper proposes a novel speech enhancement (SE) algorithm based on estimating expected values of speech cepstra (EVSC), referred to herein as EVSC-SE. Unlike conventional SE algorithms, where the a priori signal-to-noise ratio (SNR) is estimated directly from expected values of speech spectra (EVSS), the proposed EVSC-SE algorithm estimates the a priori SNR from the EVSC. Under the Gaussian assumption for speech signals, we propose two approaches to estimate the EVSC. One is a novel cepstral subtraction approach, which is estimation-based. The other is a modified cepstrum thresholding approach, which is detection-based. Compared with EVSS-based SE algorithms, the proposed EVSC-SE algorithm is capable of tracking the a posteriori SNR rapidly at word onsets and offsets, achieving less speech distortion. At the same time, the EVSC-SE algorithm can suppress non-stationary noise effectively. Simulation results show that the EVSC-SE algorithm outperforms EVSS-based SE algorithms in terms of segmental SNR and log-spectral distance.
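To picture the detection-based idea, the sketch below applies a simple hard threshold to the real cepstrum of a power spectrum and reconstructs a smoothed spectral estimate. This is only a generic stand-in for a cepstrum thresholding step; the threshold value and the overall pipeline are assumptions, and the paper's EVSC estimators are more elaborate.

import numpy as np

def thresholded_cepstrum(power_spectrum, threshold):
    """Real cepstrum of a power spectrum with small coefficients set to
    zero (a simple stand-in for a detection-based cepstral estimate)."""
    log_spec = np.log(power_spectrum + 1e-12)
    cep = np.fft.irfft(log_spec)
    cep[np.abs(cep) < threshold] = 0.0        # keep only salient quefrencies
    smoothed_log_spec = np.fft.rfft(cep).real
    return cep, np.exp(smoothed_log_spec)     # smoothed spectrum estimate

# Hypothetical noisy power spectrum (513 bins from a 1024-point frame)
spec = np.abs(np.fft.rfft(np.random.default_rng(3).standard_normal(1024))) ** 2
cep, smooth = thresholded_cepstrum(spec, threshold=0.05)
print(cep.shape, smooth.shape)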
(1) Graduate School of Science and Technology, Ryukoku University, Shiga, Japan (2) Faculty of Science and Technology, Ryukoku University, Shiga, Japan
ABSTRACT
Drums and bass guitars create the rhythm in popular music. A drum pattern database including thousands of music excerpts was previously developed for investigating tendencies in drum-part construction in musical scores. For the bass guitar, several studies have been carried out, such as estimating the fundamental frequencies (F0) of the melody or bass lines from monaural audio signals containing sounds from various instruments, and focusing on the bass guitar to classify genres. However, the tendencies of bass guitar parts in popular music across many patterns have not been investigated; therefore, a database of bass guitar profiles, such as rhythm and pitch, has not been reported. We propose an identification method for the MIDI tracks of the bass guitar from MIDI excerpts comprising several instrumental tracks. The proposed method identifies the bass guitar part using a heuristic approach informed by several bass guitar players, so that an accumulation of bass guitar patterns is obtained. Each one-bar segment is then identified as either a bass guitar part, called a "bass guitar pattern", or not. The onset, interval, and dynamics profiles are extracted from the bass guitar patterns. The onset profile represents the onset time sequence, the interval profile represents the interval of each note from the root note of the excerpt, and the dynamics profile represents the rank of the MIDI velocity of each note. We constructed a bass guitar pattern database comprising these three types of profiles. We introduce several parameters for automatic arrangement derived from the proposed database using principal component analysis (PCA). We call these parameters the "eigenphrase of bass guitar". Using PCA, we extract the principal components of the onset, interval, and dynamics profile databases, giving the onset, interval, and dynamics profiles of the eigenphrase of bass guitar, respectively. The eigenphrase of a bass guitar is the combination of these three profiles in terms of onset, interval, and dynamics. We propose an arrangement method for the bass guitar part that multiplies each principal component vector of the eigenphrase of bass guitar by relative weights. The performances generated by automatic arrangement with the proposed method were judged by the authors to be natural rather than artificial.
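The following sketch shows how principal components ("eigenphrases") could be extracted from a matrix of one-bar onset profiles and recombined with weights for arrangement. The profile encoding (16 onset positions per bar) and the random toy database are assumptions made for illustration only.

import numpy as np

def eigenphrases(profile_matrix, n_components=8):
    """PCA of one-bar profiles (rows = patterns, columns = positions).
    Returns the mean profile and the leading principal components."""
    X = np.asarray(profile_matrix, dtype=float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def arrange(mean, components, weights):
    """Generate a new profile as mean + weighted sum of eigenphrases."""
    return mean + np.asarray(weights) @ components

# Hypothetical database: 1000 patterns, 16 onset positions per bar
rng = np.random.default_rng(1)
db = (rng.random((1000, 16)) > 0.7).astype(float)
mean, comps = eigenphrases(db, n_components=4)
print(arrange(mean, comps, [0.5, -0.2, 0.0, 0.1]).round(2))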
Laboratoire d'acoustique de l'université du Maine, France
ABSTRACT
Electroacoustic systems have been used in the past to simulate the self-oscillations produced by a mechanical system such as a violin string or a wind instrument. For a wind instrument, the exciter block can be replaced by an electric circuit providing energy and a non-linear feedback to the acoustic resonator. These systems can be simpler to model than the real instrument, while keeping a general behaviour close to that of the original. Compared to a numerical simulation, this system allows the exact behaviour of the resonator to be maintained, while the shape of the non-linear function relating incoming and outgoing pressure waves can be finely adjusted, or at least known with great precision. This knowledge, however, requires a good choice of electroacoustic transducers and a good description of their coupling to the acoustic resonator. After a brief description of the system and its modes of resonance, this presentation will focus on the application of such a system to the study of attack transients. In particular, the interest is to predict the regime the instrument will stabilise into after a particular attack envelope imposed on its control parameters.
(1) POems team, INRIA, Rocquencourt, France (2) UME, ENSTA, Palaiseau, France
ABSTRACT
Construction of a physical model of the grand piano involves complex and multidimensional phenomena. We present a model of piano strings coupled to a soundboard, and its numerical approximation. Measurements on piano strings and bridge show phantom partials and a time precursor, neither of which can be explained by the linear scalar string model. A classical model of nonlinear strings by Morse & Ingard implies that we should consider the longitudinal displacement as well as the standard transversal displacement of the string, in a nonlinearly coupled system. Various approximate (polynomial) models have been derived from this one by expanding the nonlinearity (a square root term) around the rest position of the string. We provide a mathematical justification of the most widely used model. Transmission of the string motion to the rest of the structure is essential from the acoustical point of view. We use a modal approach for the soundboard, and we write a nonstandard reciprocal coupling condition between strings and soundboard at the bridge. Numerical approximation of such a nonlinear, multidimensional and coupled problem is a difficult issue. We use an energy approach to achieve stability, which leads to an innovative implicit numerical scheme.
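For context, the polynomial models mentioned above come from expanding the square-root nonlinearity around the rest position of the string. A standard expansion of this kind is quoted below for illustration; it is not necessarily the exact form used by the authors.
\[
\sqrt{(1+u_x)^2 + v_x^2}\;\approx\; 1 + u_x + \tfrac{1}{2}\,v_x^2 - \tfrac{1}{2}\,u_x v_x^2 + \dots
\]
where \(u\) is the longitudinal displacement, \(v\) the transversal displacement, and the subscript \(x\) denotes the spatial derivative.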
ENSTA ParisTech, Paris, France
ABSTRACT
The coupling between strings and soundboard at the bridge is essential in determining the quality of piano tones. The objective of this work is to investigate the coupling properties through time-domain modelling of a string triplet coupled to the input admittance. For this purpose, various input admittances were measured on a strung upright piano soundboard and modelled as a parallel set of oscillators. These oscillators are coupled to a linear finite-difference model of slightly detuned strings. This model accounts for the reaction of the bridge on the string waveforms, and thus differs from the usual methods where the bridge velocity is obtained through convolution between the string forces and the admittance impulse response. A comparison between both methods is presented in the paper.
Each oscillator of the admittance is defined by three parameters: amplitude, frequency and damping factor, from which, equivalently, mass, stiffness and resistance are derived. Modifying the amplitudes allows the effects of variations of the global mobility pattern to be studied without affecting individual frequencies and dampings. This can be related, for example, to variations of the soundboard thickness. Variations of selected frequencies simulate selected variations of modes, as in the addition of stiffeners, for example. Finally, variations of frequency-dependent losses in the soundboard material can be simulated through modifications of the set of damping factors. Simulated string and bridge waveforms yield a better understanding of the effects of both string tuning and bridge mobility on the amplitude, duration and decay pattern of the tone. As is well known by piano makers and tuners, the bridge shows large variations of input admittance along its compass. As a consequence, tuners try to compensate for these variations by acting on string tension, in order to avoid too much irregularity in sound power and duration across the piano register.
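Under one common single-degree-of-freedom convention (an assumption here, not necessarily the authors' exact parameterisation), an admittance resonance of peak magnitude A at frequency f0 with damping ratio zeta maps to equivalent resistance, mass and stiffness as sketched below.

import numpy as np

def oscillator_from_admittance(peak_admittance, f0, zeta):
    """Equivalent resistance, mass and stiffness of one bridge-admittance
    resonance, assuming the mobility of a SDOF oscillator peaks at 1/r."""
    w0 = 2.0 * np.pi * f0
    r = 1.0 / peak_admittance        # resistance from peak mobility
    m = r / (2.0 * zeta * w0)        # since r = 2 * zeta * w0 * m
    k = m * w0 ** 2                  # stiffness from natural frequency
    return r, m, k

# Hypothetical resonance: 0.02 s/kg peak mobility at 180 Hz, 1 % damping
print(oscillator_from_admittance(0.02, 180.0, 0.01))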
Research Center for Applied Sciences, Academia Sinica, Taiwan
ABSTRACT
Saxophone reeds have traditionally been made from the giant reed, Arundo donax L., which belongs to the Poaceae family. It has been shown that the water-extractable sugars in the reed greatly affect its frequency response and hence the tone richness of the musical instrument. Good musical performance has also been found to be associated with the anatomical characteristics of Arundo donax L. Bamboo, which also belongs to the Poaceae family, has a similar longitudinal fiber structure and polysaccharide composition to Arundo donax L. It is therefore intriguing to investigate whether bamboo is suitable for making saxophone reeds. Culms of mature bamboo with diameters of 10 to 15 cm were used in this study. Straight internodes were sawn and then shaped to dimensions of 72 x 16 x 3 mm using a CNC milling machine. The resulting bamboo chips were then manually shaped into alto saxophone reeds using a reed profiling machine. Arundo donax L. reeds made with the same profiling machine were used for comparison. Reed strength was measured with a strength grading system.
Three species of bamboo from two genera, Bambusa and Phyllostachys, were tested. Bambusa oldhamii Munro belongs to the genus Bambusa; Phyllostachys makinoi Hayata and Phyllostachys pubescens Mazel (moso bamboo) belong to the genus Phyllostachys. Only moso bamboo yields reeds with stable dimensions; reeds made of the other two species distort significantly after hydration.
The timbre of alto reeds made of Arundo donax L. and moso bamboo was analyzed from their audio spectrograms. A microphone with flat frequency response from 20 Hz to 20 kHz was used for sound recording. The timbre analysis focused on the spectral distribution. Spectra from the lowest note B flat (measured fundamental 279 Hz, note C sharp on the piano) to the highest note F sharp (measured fundamental 888 Hz, note A on the piano), played on an alto saxophone using both types of reeds, were analyzed with Adobe Audition and the free software SPEAR. Harmonics and subharmonics were analyzed and compared. The results will be reported in the full paper.
Laboratory for the Mechanics of Solids UMR 7649, École polytechnique, 91128 Palaiseau Cedex, France
ABSTRACT
The vibrations of the soundboard of an upright piano in playing condition are investigated. It is first shown that the linear part of the response is at least 50 dB above its nonlinear component at normal levels of vibration. Given this essentially linear response, a modal identification is performed in the mid-frequency domain [300-2500] Hz by means of a novel high-resolution modal analysis technique (Ege, Boutillon and David, JSV, 2009). The modal density of the spruce board varies between 0.05 and 0.01 modes/Hz and the mean loss factor is found to be approximately 2%. Below 1.1 kHz, the modal density is very close to that of a homogeneous isotropic plate with clamped boundary conditions. Higher in frequency, the soundboard behaves as a set of waveguides defined by the ribs. A numerical determination of the modal shapes by a finite-element method confirms that the waves are localised between the ribs. The dispersion law in the plate above 1.1 kHz is derived from a simple waveguide model. We show how the acoustical coincidence scheme is modified in comparison with that of thin plates. The consequences in terms of radiation of the soundboard in the treble range of the instrument are also discussed.
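For reference, the asymptotic modal density of a homogeneous isotropic thin plate, to which the board is compared below 1.1 kHz, is the standard result
\[
n(f)\;=\;\frac{S}{2}\sqrt{\frac{\rho h}{D}},\qquad D=\frac{E h^{3}}{12\,(1-\nu^{2})},
\]
where \(S\) is the plate area, \(h\) its thickness, \(\rho\) the density, \(E\) Young's modulus and \(\nu\) Poisson's ratio; to leading order this value does not depend on the boundary conditions.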
Australian National University, Canberra, Australia
ABSTRACT
Inverse problems in musical acoustics have a long history. Essentially they involve deducing something about the structure of the instrument simply by examining its sound. Can you tell the metal from which a flute is made from its sound? Can you hear the shape of a drum? Can you hear the shape of a wooden beam? What makes a Stradivarius violin sound special? This paper will examine some of these problems to find those which are, at least to some degree, answerable. Among the questions considered in addition to those above are hearing the diameter of a metal rod, discovering the diameter of an organ pipe, and deducing the shape of a bent rod. While many of these problems are only of 'academic' interest, since direct measurements can usually be made, others, such as that concerning the violin, can often be approached only in an inverse manner. The answers to such inverse problems - even partial answers - can be of practical importance in helping to understand and improve the sound and ease of performance of musical instruments.
(1) Center for Advanced Science and Innovation, Osaka University, Japan (2) School of Engineering, Osaka University, Japan (3) Graduate School of Engineering, Osaka University, Japan (4) Graduate School of Science and Technology, Kumamoto University, Japan (5) School of Psychological Science, Health Sciences, University of Hokkaido, Japan
ABSTRACT
Musicians and acoustic engineers are often interested in understanding how the tonal quality of a musical performance is related to the acoustics of a room. This study is the second of two works that investigate the dependency of the timbre of musical tones on the room acoustic condition. In the previous paper (Part I), the subjective effect of room acoustics on the timbral brightness of clarinet tones was examined through a listening experiment. The results revealed that the room acoustic condition significantly affects the timbral brightness of clarinet tones. Further, the room acoustic effect differs according to the produced note and dynamic level (e.g., piano or forte). In order to investigate these results objectively, temporal and spatial parameters were measured from binaural room impulse responses (BRIRs), and the spectral centroid (fc) of semi-anechoic and reverberant signals was analyzed acoustically. The results of a linear regression analysis, which involved varying the fc_scaled factor (fc values scaled by the range of fc within the nine stimuli with the same note) and inputting nine combinations of perceived brightness (three dynamic levels x three produced notes) for each room acoustic condition, indicated that perceived brightness increases linearly with fc_scaled. This result is in accordance with previous studies on the timbral brightness of musical tones. Further, the results suggest that each of the regression constants correlates with certain room acoustic parameters.
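A spectral centroid of the kind used in the analysis can be computed from a magnitude spectrum as in the following sketch; the frame length and window choice are assumptions for illustration, not the paper's exact settings.

import numpy as np

def spectral_centroid(signal, fs, n_fft=4096):
    """Spectral centroid (Hz) of one analysis frame: the magnitude-weighted
    mean frequency of the spectrum."""
    frame = signal[:n_fft] * np.hanning(min(len(signal), n_fft))
    mag = np.abs(np.fft.rfft(frame, n_fft))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    return float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))

# Hypothetical test tone at 440 Hz
tone = np.sin(2 * np.pi * 440.0 * np.arange(4096) / 44100.0)
print(spectral_centroid(tone, 44100.0))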
(1) Wakayama University, Wakayama, Japan (2) Ritsumeikan University, Kusatsu, Kyoto, Japan
ABSTRACT
Strong vocal expressions in singing use hoarse voice effectively in various manners. However, analysis and synthesis of such voice qualities have been a challenging topic, with little success. An excitation structure extraction framework called XSX was introduced to represent such complex structured vocal excitation, with various types of aperiodicity, as an integral component of TANDEM-STRAIGHT, a widely used speech analysis, modification and resynthesis framework. TANDEM-STRAIGHT is basically a source-filter model extended by introducing a temporally stable power spectral representation for periodic signals and F0-adaptive spectral envelope estimation based on the consistent sampling theory. The excitation source signal used in TANDEM-STRAIGHT is a mixture of pulses and colored random signals. The source signal parameters are extracted by XSX and an aperiodicity extraction procedure. XSX is based on spectral division and inverse Fourier transform of power spectra by their spectral envelopes, calculated for a set of periodicity candidates. Combining salience scores for each candidate yields an integrated measure to detect locally periodic components. The aperiodicity extraction procedure is based on long-range linear prediction of band-pass signals produced by a set of quadrature mirror filters applied to the original and the time-warped signals. This data-driven approach makes it possible to extract and represent complex excitation structures such as diplophonia. The analysis results are used to design a voice excitation source which is capable of adding and modifying hoarse vocal expressions and enables morphing between two or more expressive performance examples.
(1) Sound & IT Development Division, Yamaha Corporation, Hamamatsu, Japan (2) Faculty of Architecture, Design and Planning, University of Sydney, NSW, Australia
ABSTRACT
Modern electronic instruments not only provide improved tonalities but also allow players to choose between loudspeaker (public) and headphone (private) monitoring. An ideal electronic instrument, such as an electronic piano, would therefore reproduce a perceptually similar sound for both. While the tonal quality remains relatively similar, a headphone-reproduced sound is distinctly different from that reproduced by a loudspeaker, primarily because the spatial image of the headphone sound follows the player's head movement synchronously. A system utilizing a motion tracking sensor might enable the headphone sound to remain steady. However, such a system faces several challenges, including processing latency and timbre change. Here, we present the results and details of a new method developed for reproducing piano sound via headphones; this method primarily adjusts the level difference between the left and right headphone signals according to the player's horizontal head movement, i.e., yaw. For the level adjustment, the authors measured the interaural level differences (ILDs) of each key of a grand piano as a function of the yaw angle. These ILDs enable the headphone piano sound to rotate in the direction opposite to the head movement. Coupled with the motion tracking sensor attached to the headphones, the proposed method can stabilize the headphone sound of a piano, regardless of the player's active movement during performance. A subsequent analysis revealed that the eighty-eight sets of ILDs could be reduced to six equivalent subsets by grouping adjacent keys with similar ILDs. Further, the six sets of ILDs were fitted to six equations that parametrically represent the measured ILDs. A subsequent informal listening test of the proposed method showed that players perceived steady, natural, and present piano imagery.
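One way to picture the level adjustment is as a yaw-dependent gain split between the left and right headphone signals. The parametric ILD function below is a hypothetical placeholder, not one of the six fitted equations from the paper, and splitting the ILD equally between channels is an assumption.

import numpy as np

def apply_ild(left, right, yaw_deg, ild_of_yaw):
    """Rotate the piano image against head movement by splitting a
    yaw-dependent interaural level difference (dB) between channels."""
    ild_db = ild_of_yaw(yaw_deg)
    g = 10.0 ** (ild_db / 40.0)      # half the ILD applied to each channel
    return left * g, right / g

# Hypothetical linear ILD model: 0.2 dB per degree of yaw
ild_model = lambda yaw: 0.2 * yaw
l, r = apply_ild(np.ones(4), np.ones(4), yaw_deg=15.0, ild_of_yaw=ild_model)
print(l, r)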
(1) Graduate School of Science and Technology, Ryukoku University, Kyoto, Japan (2) Faculty of Science and Technology, Ryukoku University, Kyoto, Japan
ABSTRACT
Many researchers have developed systems to automatically evaluate the proficiency of musical performances. These systems require users to input a musical score, which is not convenient. We have developed a system that can evaluate a player's proficiency in basic performance using a MIDI drum kit without the need to input a musical score. The proposed system estimates a musical score from the player's performance on the basis of the Bayesian method. The method of evaluating performance proficiency from a previous study is used in this work. To estimate a musical score for an input performance, an adaptation probability for the player's performance is calculated from a database of 47,000 patterns. A posterior probability of each pattern is then calculated by multiplying the adaptation probability by a prior probability, where the database is used to obtain the prior probability from the occurrence frequency of each pattern. Finally, the pattern with the highest posterior probability is used as the estimated musical score of the player's performance.
Using the occurrence frequency of the patterns as a prior probability for estimation is difficult because focusing solely on the occurrence frequency does not capture the similarity among patterns. In other words, when the occurrence frequency of a pattern is employed as a prior probability, a pattern with low frequency in the database is rarely estimated even if it is actually played. Thus, the occurrence frequency of each pattern is smoothed on the basis of a Parzen window. First, the Euclidean distance between patterns is calculated on the basis of the difference of notes. Second, the occurrence frequencies of neighboring patterns are divided by the Euclidean distance between them and added to the occurrence frequency of the current pattern. Finally, the prior probability of the current pattern is obtained from the smoothed occurrence frequencies. An evaluation experiment was carried out in which the musical score was estimated by our method from recorded performances. The F-measure was calculated by comparing the intended score, i.e., the solution, with the estimated one. An F-measure of 0.93 was obtained, so the performance of the proposed method was evaluated as good. Our method has been implemented in a support system for self-learning of the drums, enabling it to estimate both the musical score and the player's proficiency.
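The smoothing and the Bayesian selection described above can be sketched as follows; the pattern encoding and the toy numbers are assumptions, and the smoothing is a simplified reading of the described inverse-distance scheme.

import numpy as np

def smoothed_prior(counts, patterns):
    """Smooth pattern occurrence counts by adding neighbouring counts
    weighted by the inverse Euclidean distance between patterns, then
    normalise to a prior probability."""
    counts = np.asarray(counts, dtype=float)
    X = np.asarray(patterns, dtype=float)
    prior = counts.copy()
    for i in range(len(counts)):
        d = np.linalg.norm(X - X[i], axis=1)
        mask = d > 0
        prior[i] += np.sum(counts[mask] / d[mask])
    return prior / prior.sum()

def estimate_score(likelihoods, prior):
    """Pick the pattern with the highest posterior = likelihood * prior."""
    return int(np.argmax(np.asarray(likelihoods) * prior))

# Hypothetical 3-pattern toy example
p = smoothed_prior([10, 1, 0], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
print(p, estimate_score([0.1, 0.4, 0.5], p))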
Applied Acoustics Lab., Institute of New Media and Communications, School of Electrical Engineering and Computer Science, Seoul, Korea
ABSTRACT
The large temple bell is a religious percussion instrument whose unique sound is determined by its shape and materials. Korean temple bells have been cast for more than 1200 years. The forms of ancient Korean temple bells have varied with the era, in the barrel-like profile and in the beautiful engravings on the body. Even though there are common characteristics of the Korean temple bell sound, such as the beat and the frequency ratio of the fundamental to the hum, Korean temple bells offer a wide spectrum of sound characteristics, which causes individual differences in preference.
In this paper, we adopt a nonlinear time-series analysis method known as the recurrence plot. While conventional frequency-domain analysis investigates each frequency component of a time-series signal separately, the recurrence plot looks into the relationships between components and visualizes the hidden structures of time-series signals in a 2D space. This effort is taken a step further by the quantification of recurrence plot elements. Quantities obtained from recurrence quantification analysis (RQA) reveal stationary and nonstationary structures in the sounds of Korean temple bells. The preference for musical instruments by human subjects is determined by several criteria. To mimic the preference decision of human subjects, we further process the quantities obtained from RQA and merge them into preference decision making. This is conducted with PROMETHEE, one of the multicriteria decision-making methods. The results are represented in the form of a ranking to be compared with the subjective test. Rank correlations such as Kendall's coefficient and the Blest index are utilized to show similarities between the suggested model and the preference test by human subjects.
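A recurrence plot of a bell sound can be computed from a delay embedding as in the sketch below; the embedding dimension, delay and threshold, as well as the synthetic decaying two-partial test signal, are assumptions made for illustration.

import numpy as np

def recurrence_plot(x, dim=3, delay=10, threshold=0.1):
    """Binary recurrence matrix of a time series from a delay embedding:
    R[i, j] = 1 when embedded states i and j are closer than the threshold."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * delay
    states = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    return (dists < threshold * dists.max()).astype(np.uint8)

# Hypothetical decaying two-partial "bell-like" signal
t = np.arange(0, 1.0, 1.0 / 4000.0)
sig = np.exp(-3 * t) * (np.sin(2 * np.pi * 64 * t) + 0.5 * np.sin(2 * np.pi * 160 * t))
print(recurrence_plot(sig[:400]).mean())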
(1) The Bionic Ear Institute, East Melbourne, Victoria, Australia (2) The University of Melbourne, Victoria, Australia
ABSTRACT
Music is often composed of different melodic lines that are played together, either on the same or different instruments. These melodic lines, or 'streams', are often defined or separated by a number of perceptual parameters, such as pitch, timbre or loudness. One important aspect of listening to music is being able to hear these melodic lines separately and in comparison to each other. Hearing impairment, particularly when using a cochlear implant, reduces the perceptual differences between auditory sources, thereby reducing auditory stream segregation and affecting the ability to enjoy music. Cochlear implant users are known to have poor perception of pitch and timbre but relatively good perception of time-based sound features, such as rhythm. Musicians, on the other hand, have extensive training in auditory streaming and in using subtle acoustic cues to separate sound sources. The aim of this study was to examine the effect of four acoustic parameters on the difficulty of extracting a simple 4-note melody from a background of distracter notes. Melody extraction difficulty ratings were recorded while four acoustic parameters of the distracter notes were varied separately: fundamental frequency (F0), intensity, temporal envelope and spectral envelope. The average difficulty ratings for listeners with normal hearing and no musical training (N=19) were compared with two other groups - musicians with normal hearing (N=18) and cochlear implant (CI) users (N=11).
The average difficulty ratings for musicians were lower than for non-musicians for all four parameters, reflecting the effect of training on auditory streaming. For CI users, difficulty ratings were higher when the distracter notes varied in F0 and the spectral envelope. These results reflect the difficulty that CI users have in pitch and timbre discriminations. However, CI users reported difficulty ratings within the range of non-musically trained listeners when the distracter notes varied in intensity and temporal envelope. These results likely reflect the operation of the CI sound processor, which presents gross spectral and temporal envelope cues well, but does not resolve individual harmonics of the fundamental frequency (F0) or fine timing cues. The results have implications for the design of new CI sound processors that will enhance music appreciation through the artificial enhancement of specific acoustic cues.
Hitotsubashi University, Tokyo, Japan.
ABSTRACT
Several studies have been carried out on emotional communication through music. These findings show that the intention of the musician can be conveyed to the listener (Juslin et al., among others). However, previous work has examined only the listener's impression ratings and the acoustic characteristics of the performance; the influence of the musician's side has not been examined in this way. In this study, we therefore examined the performer's motion, using a motion capture system, together with the listener's impressions of the performance. We examined two aspects: impression evaluation by the semantic differential (SD) method, and the motion of the arms and sticks (velocity, acceleration, moving distance, highest and lowest positions) measured by motion capture during snare drum performances of several different emotions. The results show that the performance motion is related to the listener's impression of the performance. From this, we found that the performer alters body motion in detail in order to perform purposively.
(1) Graduate School of Science and Technology, Ryukoku University, Japan (2) Department of Science and Technology, Ryukoku University, Japan
ABSTRACT
Musical scores are widely used as a descriptive medium because they are used frequently when practising an instrument. However, not many pieces have been transcribed, and a desired score might not be found. Therefore, those who need a score must listen to the performance and transcribe it into musical notation. Transcribing music requires several musical abilities, such as absolute or relative pitch, which causes difficulty for people with little musical experience. Against this background, studies on "automatic transcription", aimed at automatically transcribing the acoustic waveform of a performance, have continued. However, many of these studies have focused on the mid- to high-tone range of instruments such as the piano and guitar, and studies on low-tone range sounds, such as electric bass guitar playing, are scarce. This is because transcription requires frequency analysis with both high frequency resolution and high time resolution, and until recently it has been difficult to analyze performed sound in the low-tone range with the Fourier analysis used in previous studies.
Therefore, in this study the final target is to establish an automatic transcription system for electric bass guitar solo performances using wavelet analysis, a frequency analysis method known for its high frequency and time resolution. To this aim, an estimation method for musical notes was proposed as the first step. In this study, a musical note is a structure composed of "onset time", "offset time", and "fundamental frequency". An envelope was calculated from the acoustic waveform, and at the same time wavelet analysis gave a time function of power at each frequency. Next, noise was removed from the envelope and from the time function of power. Thereafter, the peaks of the lowest frequencies were extracted from the time function of power, and F0 continuations were detected. Finally, the notes were estimated from the envelope and the detected F0 continuations. In the evaluation of the proposed method, sounds performed with various playing styles, "fingering", "picking", "slapping", and "muting", were analyzed, and the results were compared with data obtained by hand labelling. Using the F-measure as a performance indicator, the average over all results was about 0.70.
University of Technology Sydney, NSW, Australia
ABSTRACT
Sound amplification systems play a large role in the entertainment industry and in other areas that gather people as an audience, such as congresses or places of worship. The relations between sources, transducers, acoustics and perception are subject to research from many different disciplines. However, there appears to be little interest in this subject from the social sciences and humanities. This is in sharp contrast with the interest these disciplines show in recorded music and sound. The impact the introduction of technological (mass) reproduction had on the distribution, reception and social function of music has been studied from many different angles. In my PhD research I examine the amplification of music in the light of a range of approaches, including semiotics, aesthetics, sociology and (systematic) musicology, in order to construct a theoretical framework. Such a social framework could help open up this technology to its non-specialist users, such as musicians or concert organisers. Because of the inherently technological nature of sound amplification and its evident relation to acoustics, I would like to present my findings to encourage debate and ensure relevance. In this paper I focus on the relation between the amplification level and the function of that amplification. Composer and scholar Simon Emmerson was one of the first authors to venture on a systematic approach to the subject and gives an overview of different functions of the use of amplification in music. By making a matrix of these functions and (subjective) amplification levels, interrelated factors can be made accessible, for instance the flexible role and responsibilities of the sound engineer depending on the size of a venue and the amplification level. Another interesting possible parameter for such a matrix comes from perception and behavioural psychology: "social distance" provides information about the nature and context of an (inter-human) communication. This information is usually preserved in the case of amplification (or recording, for that matter). The approach described in this paper will give non-technological stakeholders (like managerial staff at concert venues), or social scientists studying the reception of music, insight into the complex relations between music and amplification technology. Such insight could ultimately provide these stakeholders with a voice in amplification policies and, as such, in sound system design.
(1) Osaka Institute of Technology, Osaka, Japan (2) Osaka College of Music, Osaka, Japan
ABSTRACT
The "aha experience" means flashing caused in the process of a creative problem solving. Moreover, it means a mental experience which arises momentarily at the time of obtaining the insight. Recently, a variety of research was conducted about the "aha experience" from the viewpoint of the brain science. However, most of the research concerns the cognition in learning and of a visual image, but the research on the "aha experience" in the auditory cognition that uses music or sound materials is hardly done. In the present study, the "aha experience" was achieved by using a "hidden melody" which was composed based on the melody known widely. A listener can experience the "aha experience" when he finds an original melody in a hidden melody. Eight university students agreed to participate in this experiment. The changes in brain activity with EEG(electroencephalogram) during the "aha experience" were examined by measuring a momentary brain wave at the time of flashing. The result shows that the Alpha wave in the frontal lobe increases at the moment of the "aha experience".
Tokyo University of Information Sciences, Tokyo, Japan
ABSTRACT
The pitches produced by toy pianos are sometimes perceived to be inaccurate by listeners, some of whom report perceiving the perfect fifth above the nominal note of the pressed key. To investigate these assertions, the overtone frequencies of a toy piano were measured in a frequency range up to the eighth harmonic and below 5 kHz, which is considered to be important for pitch perception in the human auditory system. Time-frequency and time-intensity analyses of the overtones revealed periodic variation in frequency and amplitude, which might be caused by two close vibrational modes. The pitch of the toy piano was found to be tuned to the frequencies of the overtones corresponding to the third and fifth harmonics. No fundamental frequency component was observed. An overtone at 1.5 times the missing fundamental frequency appeared above G4. In addition, an overtone at 0.5 times the missing fundamental frequency appeared above G5. Sounds consisting of prominent overtones corresponding to the 1.5th and 3rd harmonics can be perceived as the perfect fifth above the nominal note. The pitch of the toy piano was perceived as inaccurate in part because the frequencies of the overtones corresponding to the third and fifth harmonics deviated by -4 to +24 cents and +3 to +33 cents from equal temperament, respectively.
School of Mechanical Engineering, The University of Western Australia, Crawley WA, 6009, Australia
ABSTRACT
This paper presents the measured spectra and identified pitch of thirteen tubular tower bells, which are made of thick-wall brass cylinders. Results show that the bells' natural frequencies dominating pitch and timbre are well predicted by Flugge's formula for thin-wall cylinders. The spectra of tubular tower bells and thin-wall pipe bells are compared, and their tonal differences are attributed to the differences in their modal masses and decay times. The identified pitches of the strike tones of the tubular tower bells are analysed against the frequency distribution and amplitude ratios of the partials in the spectra. The result supports previous work on the virtual pitches of pipe bells and their determination by the "octave rule". Using this result, we offer an explanation of why tubular tower bells and pipe bells both have tonal characteristics similar to those of church bells.
Seoul National University, Seoul, Korea
ABSTRACT
This paper proposes an effective method for automatic music transcription of polyphonic music. The method consists of a combination of non-negative matrix factorization (NMF), a subharmonic summation method and an onset detection algorithm. We decompose the magnitude spectrum of a music signal into the spectral components and the temporal information of every note using NMF. Then, the accurate pitch of each note is calculated from the decomposed frequency components with the subharmonic summation method. An onset detection algorithm is then applied to estimate the temporal information of each musical note. Our method is simple and has a low computational cost because it is not based on note training. Previous NMF-based approaches detect the pitch and the time duration 'manually', and are therefore difficult to use in real engineering applications. Our proposed method addresses this problem by 'automatically' detecting the fundamental frequency and the rhythm component. Furthermore, the proposed method automatically indexes the musical notes, making it more usable in the real engineering field. The transcription performance is evaluated with recorded polyphonic music signals, and the performance of the proposed method is better than that of conventional NMF-based methods in estimating both frequency components and time duration information.
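The decomposition step can be sketched with an off-the-shelf NMF of a magnitude spectrogram, as below. The spectrogram dimensions, the number of components and the use of scikit-learn are assumptions made for illustration, not the authors' implementation.

import numpy as np
from sklearn.decomposition import NMF

def decompose_spectrogram(mag_spec, n_notes=8):
    """Factor a magnitude spectrogram (freq x time) into spectral templates W
    and temporal activations H, one pair per candidate note."""
    model = NMF(n_components=n_notes, init='nndsvda', max_iter=500)
    W = model.fit_transform(mag_spec)   # frequency templates (freq x notes)
    H = model.components_               # time activations (notes x time)
    return W, H

# Hypothetical spectrogram: 513 frequency bins, 200 frames
rng = np.random.default_rng(2)
S = np.abs(rng.standard_normal((513, 200)))
W, H = decompose_spectrogram(S)
print(W.shape, H.shape)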
(1) GIPSA-lab/Grenoble-INP, France (2) Université Paris Diderot, France
ABSTRACT
We present a system for under-determined source separation of non-stationary audio signals from a stereo 2-channel linear instantaneous mixture. This system is dedicated to isolating the different instruments/voices of a piece of music, so that an end-user can separately manipulate those source signals (for real-time remixing of music). The problem is addressed with a specific informed approach, proposed in previous papers, and implemented with a coder corresponding to the music production step, e.g. recording/mixing in the studio, and a separate decoder corresponding to the signal restitution step, e.g. audio-CD at home. At the coder, the source signals are assumed to be available, and are used to i) generate the stereo 2-channel mix signal, and ii) extract distinctive features that are afterwards embedded into the mix signal using an inaudible watermarking technique. At the decoder, extracting and exploiting the watermark from the transmitted mix signal enables an end-user who has no direct access to the original sources to separate these sources from the mix signal. In previous works, we proposed two drastically different techniques to implement the informed source separation principle. We first proposed a joint "source-channel coding" approach where codebooks of time-frequency (TF) coefficient matrix prototypes were generated and used to encode the source signals. The resulting codes were watermarked into the mixture signals. In a further work, we proposed a technique that exploited the natural sparseness of audio signals in the TF domain to carry out source separation by selection of the locally most predominant sources, followed by local inversion of the mixture system. In this case, the indexes of the selected sources were watermarked. In the present study, we propose a new hybrid system that merges the two techniques: a subset of the source signals is encoded with matrix prototypes, and another subset is selected for local inversion. The respective codes and indexes are transmitted to the decoder by a new high-capacity watermarking technique (submitted to the same congress in a different paper). At the decoder, the first sources are decoded (hence estimated) and then subtracted from the mixture signal, before local inversion of the resulting sub-mixture signal leads to the estimation of the second subset of sources. This hybrid separation technique makes it possible to combine the advantages of both the coding and inversion approaches efficiently. Up to 6 different source signals can be separated from stereo mixtures, with remarkable quality, enabling separate manipulation during music restitution.
GIPSA-lab/Grenoble-INP, France
ABSTRACT
Watermarking is a technique that consists in imperceptibly hiding/embedding binary information within a signal. In the present context of audio signals, this means that the watermarking process must be inaudible. Watermarking appeared in the early 90s and was first used for the protection of digital works as part of DRM (Digital Rights Management). In this context, much more effort has been devoted to ensuring the robustness of watermarks against attacks aiming to neutralize them than to improving the quantity of watermarked information, which is usually within the range of tens of bits per second for audio signals in secure applications. Nowadays, audio watermarking can be used for other kinds of applications, in particular for metadata transmission. However, bitrates are usually still quite low, although such applications require extended bitrates balanced against lower robustness. In this study we propose a high-capacity watermarking technique for audio signals. This technique is suitable for many uncompressed audio signals, more particularly 16-bit PCM signals as widely used in audio-CD and wav formats. The proposed technique is based on applying QIM (Quantization Index Modulation) to the MDCT (Modified Discrete Cosine Transform) coefficients of the signal. The underlying principle is that if those coefficients can be significantly modified by quantization in audio compression schemes such as MPEG MP3/AAC without quality impairment, they can also be modified to embed watermark codes. Still following inspiration from audio compression, a psychoacoustic model (PAM) is used at the watermark encoder to take into consideration the behaviour of the human auditory system and meet the inaudibility constraint. The PAM is used to estimate an optimal watermarking capacity for each sub-band of each MDCT frame. The resulting capacity values are transmitted as side-information to the decoder (so that the decoder can retrieve the watermark in the corresponding sub-band). For this purpose, specific fixed capacities are allotted in the higher sub-band of the spectrum. With this technique, maximal bitrates of about 250 kbit/s per audio channel can be reached (depending on the audio content), at the expense of robustness: the system can be used for "non-secure" applications where the signal does not suffer any attack other than quantization for uncompressed format conversion. For instance, we use this technique in a watermark-informed source separation system submitted to the same congress.
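The basic QIM operation on a single transform coefficient can be illustrated as follows; the quantization step size is an arbitrary example value, and the MDCT front end and the per-sub-band capacities set by the psychoacoustic model are omitted.

import numpy as np

def qim_embed(coeff, bit, delta):
    """Embed one bit in a coefficient by quantizing it onto one of two
    interleaved lattices (dither 0 for bit 0, delta/2 for bit 1)."""
    dither = 0.0 if bit == 0 else delta / 2.0
    return delta * np.round((coeff - dither) / delta) + dither

def qim_extract(coeff, delta):
    """Recover the bit as the lattice closest to the received coefficient."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 0 if d0 <= d1 else 1

# Round-trip check on a hypothetical coefficient
c = 0.3137
for b in (0, 1):
    print(b, qim_extract(qim_embed(c, b, delta=0.05), delta=0.05))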
(1) University of Tasmania, Australia (2) CSIRO - Clayton Laboratories, Melbourne, Australia
ABSTRACT
Tonewood is the term employed to describe the wood species used to make musical instruments. These species have a proven record of consistent mechanical and acoustic qualities. Existing data show that a number of Tasmanian tonewood species have been used by luthiers over the past 25 years with varying success. Violin makers in particular, having to carve the instruments' plates from a solid block of wood, have found these species to be quite unpredictable when compared to the traditional, and reliable, European resonance spruce and curly maple. The aim of this paper is to discuss the acoustic qualities of 6 Tasmanian tonewood species which are not currently used in violin making, and the remarkable differences found between some Tasmanian species and some European species. In order to gather further information on Tasmanian tonewood, 30 specimens of wood from 6 species currently used by luthiers for guitars and other instruments were sourced; 175 test samples were then cut from the mother boards. These samples were used in a series of acoustic tests for the acoustical characterization of the specimens. A new criterion to study the anisotropy of resonance woods is proposed (the ratio between the acoustic radiation and the acoustic impedance). The most effective combination of species appears to be Huon Pine or King Billy Pine for the top and Beech Myrtle for the back.
MUSICOS Research Centre, University of Genoa, Italy
ABSTRACT
The paper attempts to contribute to the structural and dynamic analysis of musical instruments through co-modelling procedures based on standard numerical codes. Particular reference is made to stringed instruments.
The acoustic performance of musical instruments is strictly related to their mechanical, structural, dynamic and vibratory responses. Advanced numerical procedures of mechanical, structural and dynamic analysis and diagnosis are often unknown to lute makers, who develop their activity on their own experience, based on practical knowledge and empirical tests. The availability of powerful CAD and FE codes on the market is exploited here to propose a low-cost co-modelling approach able to support the construction phases of high-quality handcrafted products. The proposed co-modelling approach virtually emulates the construction phases followed by a lute maker: numerical geometrical and modal models having different levels of accuracy are generated step by step. Simulated structural and dynamic results can be compared with experimental procedures and suggest interactive modification of geometries or parameters. 3D general-purpose CAD codes (Pro/Engineer by PTC and AutoCAD by Autodesk) are applied. The geometrical description of the different parts of the violin corresponds to the various phases followed by lute makers during craft construction, from the cutting of the external profiles of the soundboard and back up to the realization of their final surfaces. The approach is organized around the generation of surface sets compatible with the description of corresponding finite shell elements: this makes possible the parametric description of variable thickness not only between elements but also within each element.
The finite element model is generated in a general-purpose structural analysis code (ANSYS) by importing the CAD geometry and applying shell elements having membrane and flexural attributes. Meshing requires suitable compromises. Discontinuities (e.g. the f-holes on the soundboard) are handled through original procedures able to automatically adjust the element size as a function of the different local areas involved. Mechanical characteristics (e.g. Young's, shear and bulk moduli, Poisson's ratio, density and thickness) can be defined parametrically for each element or group of elements. The results concern the evaluation of the natural frequencies and modal shapes of the whole instrument or of parts of it. The interactive approach allows the wood characteristics or local thickness to be adjusted and the dynamic effect of this variation to be simulated, but also allows going back to modify the 3D geometry and simulating again. The main performance and features of the proposed co-modelling approach are described practically in the paper with reference to a specific violin.
(1) MUSICOS Centre of Research, University of Genoa, Italy (2) DIMEC, University of Genoa, Italy
ABSTRACT
A methodical experimental procedure able to identify, in the field, the vibration and acoustic performance of organ pipes has been implemented, tested and discussed in the paper. In particular, experimental activities developed during the restoration of the most ancient pipe organ in the region of Liguria (Italy) are presented. Ancient organs are unique pieces conceived, designed and crafted within a specific context, to fulfil the needs of the community to which each was originally destined. Taking into account their intrinsic value and quality, restoration procedures are not comparable to those used in the construction of new instruments. The aim of the restorer is to render the instrument a harmonised whole, respecting as much as possible the original acoustic features and characteristics initially conceived. Theoretical fluid-dynamic modelling, and in particular pipe scaling (the science of measuring and deciding upon pipe diameters), are fundamental references, but the actual physical, chemical and mechanical status of surfaces and materials can strongly modify the acoustic response: consequently, mechanical, structural and dynamic performance must be detected experimentally.
The organ of the Oratory N.S. del Suffragio (by G. Giovannini, 1686), located in S. Margherita Ligure, near Genoa (Italy), was fully restored in 2009. It underwent maintenance works in the 19th and 20th centuries, without modification of the original mechanical and acoustic components (50-note keyboard, 550 pipes, 15 pedals, 11 stops). Flue pipes have no moving parts and generate their sound from a vibrating air column, like a flute or recorder. Combinations of open pipes and stopped (or closed) pipes built with different materials are involved. The experimental approach is oriented to detecting and comparing the vibratory and acoustic frequency responses of single pipes: measurements are carried out in the field, using portable multi-channel instrumentation and equipping the pipes with external microphones and arrays of micro-accelerometers mounted by means of beeswax. Excitation is generated by impact with an instrumented micro-hammer in structural tests and by a played note in vibratory and acoustical experiments. The sound quality of specific pipes is related to the frequency coupling of the structural and acoustic responses. Further comparisons with laboratory experiments on corresponding pipes using acoustic holography approaches are also discussed.
(1) Graduate School of Science and Technology, Ryukoku University, Shiga, Japan (2) Faculty of Science and Technology, Ryukoku University, Shiga, Japan
ABSTRACT
Recently, more and more people want to play the piano, but limited time and financial resources mean that many cannot be taught by experts. Therefore, support systems for self-learners have been developed. However, previous systems work only with a MIDI piano and are difficult to use with an acoustic piano. We propose a method of converting an acoustic signal of the piano into MIDI information. Our method estimates the onset, velocity, and duration of each note from the acoustic signal, together with an estimation system for proficiency based on the notes. The acoustic power of the piano usually has many peaks, and the highest points are difficult to estimate, so estimating onsets using only time information is difficult. Therefore, our system uses information from both time and frequency to estimate onsets. For frequency information, the system has a judging function based on the correlation coefficient between consecutive pitch class profiles (PCPs), where a PCP represents pitch information irrespective of octave. We expect the correlation coefficient between PCPs to detect onsets in accordance with the change of sounding pitches as keys are pressed consecutively. For time information, the system calculates the envelope. After the system has calculated a moving average of the obtained envelope, we estimate its peak time. The proposed method thus takes as an onset the time around the moving-average peak at which the correlation-based value is judged to be high. For velocity, the system estimates the height of the waveform peak within the duration of the sound. This peak height is compared with the acoustic power corresponding to the velocity values of recorded MIDI information, and the nearest value is selected as the velocity of the input sound. For duration, the time at which the PCP power attenuates is regarded as the offset time, and the interval from onset to offset is regarded as the duration. An experiment in which a performance of scales was recorded shows that our method performs well. A score of proficiency produced by the proposed method was compared with those given by piano experts. As the two correlate (r=.58, n=336), our system is confirmed to be effective.
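The PCP-based part of the onset decision can be pictured as a drop in correlation between consecutive pitch class profiles. The chroma mapping below is a simplified stand-in for the system's actual PCP computation, and the reference frequency is an assumption.

import numpy as np

def pitch_class_profile(mag_spec, freqs, fref=440.0):
    """Fold a magnitude spectrum into 12 pitch classes (octave-blind)."""
    pcp = np.zeros(12)
    valid = freqs > 20.0
    classes = np.mod(np.round(12 * np.log2(freqs[valid] / fref)), 12).astype(int)
    np.add.at(pcp, classes, mag_spec[valid])
    return pcp

def pcp_change(prev_pcp, cur_pcp):
    """1 - correlation between consecutive PCPs; large values suggest that
    the set of sounding pitches changed, i.e. a candidate onset."""
    return 1.0 - np.corrcoef(prev_pcp, cur_pcp)[0, 1]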
(1) Graduate school of Science and Technology, Ryukoku University, Shiga, Japan (2) Faculty of Science and Technology, Ryukoku University, Shiga, Japan
ABSTRACT
A slideshow is an image display in which music excerpts are often played while photographs are switched. Producing a slideshow manually requires several time-consuming steps, such as individually setting the time of each photograph switch, and therefore many systems that generate slideshows automatically have been developed. Although these systems switch photographs at a constant rate, it is not clear whether this switching time is ideal for viewers. We investigated the appropriate time for switching photographs and used the data to develop a new system. We compared the cut times in several cartoon videos containing musical elements, and the results showed that the appropriate switch times are the beat times of the music excerpts. It was also demonstrated that the activity of cartoon films that contain frequent cutting or objects in rapid motion was related to the time function of flux in the musical excerpts, where flux is an index of spectral change in the acoustic signal. We used these results to develop a system with three distinct functions. First, it estimates the beat times based on periodic times with high acoustic levels. Second, it obtains the times when the local flux tendency is relatively high by consulting the gradient time function of flux. These times are assumed to be the beginnings of musical phrases in the excerpt, because significant changes to the spectrum indicated by flux represent significant changes to the content, such as the beginning of a musical phrase. Third, it evaluates the mean value of flux between the times of high acoustic levels on the gradient time function of flux, which enables the activity of the current phrase to be obtained.
In summary, the photographs are switched on the beat times of the music excerpt, and the time intervals of the photograph switches are related to the activity of the musical phrases. We evaluated the performance of the proposed system with two tests. In one, the timing of the photograph switches was adequately synchronized with the viewers' judgment of the beat. In the other, the subjective activity of a cartoon video was confirmed to be identical to that of a slideshow generated using the music excerpt of the original video.
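A minimal Python sketch of the spectral-flux measure the system relies on is given below, assuming a mono signal x sampled at sr; the window and hop sizes are placeholders rather than the system's actual parameters.

    import numpy as np

    def spectral_flux(x, sr, n_fft=2048, hop=512):
        """Half-wave-rectified frame-to-frame spectral change (one value per hop)."""
        win = np.hanning(n_fft)
        mags = np.array([np.abs(np.fft.rfft(win * x[i:i + n_fft]))
                         for i in range(0, len(x) - n_fft, hop)])
        diff = np.diff(mags, axis=0)
        flux = np.sum(np.maximum(diff, 0.0), axis=1)
        times = np.arange(1, len(mags)) * hop / sr
        return times, flux

Phrase boundaries would then be taken where the gradient of the smoothed flux curve is high, and the activity of a phrase as the mean flux between beat times, as the abstract describes.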
(1) Institute of Acoustics, A. Mickiewicz University, Poznań, Poland (2) Faculty of String Instruments, Harp, Guitar and Violin - Making, I.J. Paderewski Academy of Music, Poznań, Poland (3) Institute of Experimental Physics, Gdańsk University, Gdańsk, Poland (4) Department of Engineering and Natural Science, Merseburg University of Applied Science, Merseburg, Germany
ABSTRACT
The main aim of the presented paper is to show the differences between two guitars, each examined in the armed and in the non-armed state, in their natural frequencies, modal damping and mode shapes. Both instruments were made almost identically. The only intentionally introduced difference was in the bracing patterns of their front plates, similar to the traditional symmetric pattern introduced by Antonio de Torres in one case and asymmetric in the other. The intention of the modification was to improve the sound of the instrument in the low-frequency range.
Two experiments were performed: (i) mechanical modal analysis (version with a fixed response point) of the front plates and (ii) optical measurement of plate velocities at the modal frequencies found in the first experiment, using a scanning laser Doppler velocimeter. Both experiments were performed on the instruments with and without arming, respectively. Thus the evolution of their vibrational behaviour over the succeeding construction phases could be observed and evaluated. Our results provide a better insight into guitar mechanics and sound radiation, allowing the improvement of the design and acoustic quality of the instruments.
(1) RMIT University, Melbourne, Australia (2) CSIRO ICT Centre, Australia
ABSTRACT
An onset detection system that exploits MPEG-7 audio descriptors is proposed in this paper, with investigations into the feasibility of MPEG-7 based onset detection performed across a diverse database of music. Detection functions were developed from both individual MPEG-7 descriptors and combinations of descriptors (joint detection functions). The results indicated that individual descriptors could achieve respectable detection performance (maximum F-measure of 0.753) with basic waveform features. Average detection performance could be improved by up to 11.2%, however, when joint detection functions were composed of diverse combinations of MPEG-7 descriptors. This may be attributed to the increased capability of detection functions composed of different spectral and temporal features in capturing the variation in onset characteristics across different musical styles. It is thus concluded that the proposed onset detection system could plausibly be integrated into an existing MPEG-7 audio analysis system with minimal computational overhead.
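The following Python fragment illustrates, in a hedged way, how several per-frame detection functions can be fused into one joint function and how detected onsets can be scored with an F-measure; the equal weighting and the 50 ms tolerance are assumptions, not values taken from the paper.

    import numpy as np

    def joint_detection_function(dfs, weights=None):
        """Normalize each detection function to [0, 1] and form a weighted average."""
        dfs = [(d - d.min()) / (d.ptp() + 1e-12) for d in dfs]
        w = np.ones(len(dfs)) / len(dfs) if weights is None else np.asarray(weights)
        return np.average(np.vstack(dfs), axis=0, weights=w)

    def f_measure(detected, reference, tol=0.05):
        """F-measure with a +/- tol second matching window (simplified matching)."""
        tp = sum(any(abs(d - r) <= tol for r in reference) for d in detected)
        precision = tp / max(len(detected), 1)
        recall = tp / max(len(reference), 1)
        return 2 * precision * recall / max(precision + recall, 1e-12)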
(1) Physics Laboratories, Kyushu Institute of Technology, Iizuka, Fukuoka, Japan (2) Research Institute for Information Technology, Kyushu University, Higashi-ku, Fukuoka, Japan
ABSTRACT
Edge tones are acoustic fluctuations generated by the oscillation of a jet emanating from a flue and colliding with an edge. The study of edge tones has a long history and many authors have contributed to this problem. It is considered that some feedback mechanism, fluid and/or acoustic, sustains the jet oscillation, whose frequency mainly determines the frequency of the edge tone emitted by the aerodynamic sound sources, the so-called Lighthill sources. However, the detailed mechanism of the edge tone is still not completely understood.
The aim of our study is to specify the positions of the sound sources and to clarify how they are created in the turbulence and how the sound is emitted from them, in terms of aerodynamic sound theory. As a first step, we numerically reproduce the jet oscillation as a sound source and the edge tones as its product, simultaneously, for 2D and 3D models with compressible large-eddy simulations. In previous work we succeeded in reproducing the sound vibrations of 2D and 3D air-reed instruments with a numerical scheme implemented in the free software OpenFOAM.
In this paper, we concentrate on the simple case of a symmetric edge without a resonator and calculate edge tones for 2D and 3D models while changing the jet velocity. Lighthill's sound sources are obtained numerically and their behavior is analyzed with statistical methods: mutual correlations between the sound sources and the sound field are calculated so as to examine the details of their interaction. With these results, we try to specify the most dominant area of the sound sources distributed around the jet and the eddies behind the edge, which are generated by the collision of the jet with the edge.
We also compare Lighthill's sound source with the sound source of the vortex sound theory formulated by Howe. In the vortex sound theory, the sound wave is considered as propagation of fluctuation of the total enthalpy instead of the air pressure or air density. Thus, the formulae are different and so are the source terms. We will clarify the difference of source distribution between Lighthill's and Howe's formulae and will discuss why such a difference occurs.
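For orientation, the two analogies compared here are commonly written in the following textbook forms (not necessarily the exact formulations used by the authors). Lighthill's analogy reads

\[
\frac{\partial^{2}\rho'}{\partial t^{2}} - c_{0}^{2}\nabla^{2}\rho' = \frac{\partial^{2} T_{ij}}{\partial x_{i}\,\partial x_{j}},
\qquad T_{ij} \approx \rho\, v_{i} v_{j},
\]

while Howe's vortex sound equation describes the propagation of the total-enthalpy fluctuation \(B\),

\[
\left(\frac{1}{c_{0}^{2}}\frac{D^{2}}{Dt^{2}} - \nabla^{2}\right) B = \nabla\!\cdot(\boldsymbol{\omega}\times\mathbf{v}),
\]

so the respective source terms, \(\partial^{2}T_{ij}/\partial x_{i}\partial x_{j}\) and \(\nabla\!\cdot(\boldsymbol{\omega}\times\mathbf{v})\), generally have different spatial distributions, which is the difference examined in the paper.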
(1) University of Toulouse PHASE UPS, France (2) Henri Selmer Paris, France (3) ESPCI Physico Chimie des Polymères et Milieux disperses, France
ABSTRACT
African blackwood and other rare species of wood, used for centuries by instrument makers, may in the near future be protected or of decreasing quality. More common species cannot be used to build high-quality instruments, for mechanical and acoustical reasons. We show that it is possible to modify these woods in order to build woodwinds. We compare their mechanical properties, obtained through ultrasonic measurements compatible with the constraints on sample size, with the mechanical properties of high-quality African blackwood samples. The experimental set-up compatible with the samples used by instrument makers is described, the rationale that leads to the elastic constants is presented, and the results for unmodified wood, modified wood and African blackwood samples are detailed. We show that our modifications give the modified wood mechanical properties comparable to those of African blackwood, providing an alternative to the use of rare wood species.
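As a reminder of how such ultrasonic measurements typically yield stiffness constants (a standard relation assumed here for illustration; the paper's exact procedure is not given in the abstract), the wave speed along a principal direction gives

\[
C_{ii} = \rho\, V_{ii}^{2},
\]

where \(\rho\) is the wood density and \(V_{ii}\) the longitudinal (or shear) ultrasonic velocity measured along that direction; a set of such measurements along and across the grain yields the orthotropic elastic constants that can be compared between modified wood and African blackwood.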
(1) Department of Physics, Nagpur University, Nagpur, India (2) Department of Electronics, Nagpur University, Nagpur, India (3) Laboratory of Acoustics, Faculty of Engineering, University of Porto, Porto, Portugal
ABSTRACT
The human whistle is a representation of human vocal singing. Singing (solo and congregational) is an essential component of sacred music for collective worship in a Catholic church. The acoustic characterization of sacred music is defined in this paper through a derived Acoustic Comfort Impression Index (ACII) and several Acoustic Worship Indices (AWI), namely the Subjective Sacred Factor (SSaF), the Subjective Intelligibility Factor (SInF) and the Subjective Silence Factor (SSiF). In this study, live sacred music rendered by the human whistle is compared with that rendered by the cello, clarinet, violins and the ensemble, in the Catholic church of the Divine Providence (Goa, India). Among the significant results, ACII for the human whistle was found to be better than ACII for the musical instruments (F = 2.38, p = 0.08); this difference was more significant at the nave of the church (music source) (F = 2.94, p = 0.04) and lower at the choir loft (music source) (p = 0.21). SInF for the ensemble music was found to be better than SInF for the human whistle (F = 3.07, p = 0.03). At the nave of the church, SInF was found to be better than SSaF and SSiF (F = 4.17, p = 0.02). SSaF and SInF were equally better than SSiF at the choir loft (p = 0.02). This study opens the possibility of optimized use of the human whistle in rendering sacred music in a church.
School of Electrical and Information Engineering, The University of Sydney, NSW, Australia
ABSTRACT
A simplified auditory model has been used to calculate an enhanced summary autocorrelation function (ESACF), which can be used as a tool for musical pitch estimation from an audio signal. The model is not only computationally efficient, but its ESACF also gives good results for single-pitch estimation. However, using the ESACF for multiple-pitch estimation is very difficult, because musical instruments usually show timbre variations even among instruments of the same kind. By modifying this model, we can generate input features for a neural network that assists the process of multiple-pitch estimation. Each output of the neural network is mapped to a musical pitch and indicates the probability that this pitch is present. In our experiments, we generated data sets from recordings of real musical instruments and used them to train the neural network and evaluate its performance. We compare the performance of the neural network using the proposed features with that using spectral features generated from the audio spectrum. From the results, we found that the performance obtained with the proposed features is comparable with that of the features generated from the audio spectrum, and some experiments showed that the proposed features yield better performance for musical instrument signals with slight changes in their timbre.
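A bare-bones Python sketch of the kind of feature mapping implied above is shown below: a summary autocorrelation (a plain stand-in for the ESACF, without the enhancement step) is sampled at the lags of candidate note periods, and those values would feed a network with one sigmoid output per pitch. The frame length and note range are illustrative assumptions.

    import numpy as np

    def summary_autocorrelation(frame, max_lag=1000):
        """Plain autocorrelation of one frame (a stand-in for the ESACF)."""
        frame = frame - frame.mean()
        ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
        ac = ac[:max_lag]
        return ac / (ac[0] + 1e-12)

    def pitch_lag_features(ac, sr, midi_notes=range(36, 96)):
        """Sample the autocorrelation at the lags of candidate note periods."""
        lags = [int(round(sr / (440.0 * 2 ** ((m - 69) / 12.0)))) for m in midi_notes]
        return np.array([ac[l] if l < len(ac) else 0.0 for l in lags])

    # These features would be fed to a neural network with one sigmoid output per
    # candidate pitch, trained on recordings whose note content is known.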
(1) Graduate School of Engineering, Kanazawa Institute of Technology, Japan (2) Sockets Inc., Japan
ABSTRACT
Several researchers have illustrated musical emotion using two-dimensional models. These models suggest that the space of musical emotion is spanned by valence and arousal dimensions, or by cheerfulness and tension dimensions. In the present study, a perceptual experiment was conducted to estimate the effects of tempo, sound level and articulation (staccato-legato) on the perceived degree of tension. An ascending major scale played with pure tones was used as the stimulus. The performing register was fixed so that the overall long-term spectral centroid was at 15.62 on the ERB-rate scale. The articulation value was defined as the duration of each tone divided by the inter-onset interval.
The experiment consisted of three sessions. In Session 1, the sound level was fixed at 83 dB (LAeq) and the articulation value was fixed at 1.0. The tempo was set at 70.7, 100.0, 141.4, 200.0, 282.9 and 400 BPM. Scheffé's paired comparison method was used: the six stimuli were paired and presented to the listeners. Twenty-one listeners were asked to compare the perceived degree of tension of the former and latter stimuli and to rate it on a seven-step category scale. In Session 2, the sound level was set at 71, 77, 83, 89 and 95 dB, with a tempo of 141.4 BPM; in addition, a stimulus played at 282.9 BPM and 83 dB was used. The articulation value was fixed at 1.0 for these six stimuli. In Session 3, the articulation value was varied from 0.2 to 1.0 in steps of 0.2. For these five stimuli the tempo was fixed at 141.4 BPM; a stimulus with 282.9 BPM and an articulation value of 1.0 was also used, and the sound level was fixed at 83 dB for the six stimuli. Each session thus included two identical stimuli, played at 141.4 and 282.9 BPM respectively, both at 83 dB with an articulation value of 1.0. Using the psychological distance in tension between these two stimuli, all stimuli used in the three sessions were placed quantitatively on a common tension scale. The results showed that the degree of perceived tension increased proportionally with the logarithm of the BPM value. Moreover, the tension increase when the tempo was doubled from 141.4 to 282.9 BPM at an articulation value of 1.0 was almost equal to the tension increase when the articulation value changed from 1.0 to 0.4.
Graduate School of Engineering, Kanazawa Institute of Technology, Nonoichi, Japan
ABSTRACT
In recent years, the hardware and software of video games have been developing rapidly. This has led to a rapid increase in the cost and time needed to create high-quality content for video games, and the same holds for composing game music. Musicians and sound engineers create musical pieces for various scenes in a game based on their artistic sense and experience. Producers and directors likewise decide on a piece of music for a scene, among various alternatives, based on their sense and experience. A scientific basis for designing game music efficiently is therefore strongly needed. As a first step toward such a basis, the present study investigates the emotion of game music. The emotion of music is described by various adjectives such as "exciting", "graceful", "happy", "dignified", etc. This implies that musical emotion is represented in a multi-dimensional space. Musical psychologists have illustrated the emotional space of music with two to eight dimensions. Among these studies, Hevner (1937, 1938) showed a simple two-dimensional model spanned by valence and activity. However, the musical materials used in these dimensional studies were limited to classical music.
Video games tend to use short pieces of music repeatedly, and some of them do not have a clear melodic structure. Therefore, it has to be clarified whether the emotion of game music shows a dimensional structure different from that of classical music or not. In the present study, the dimensional structure of emotion in game music was examined. One hundred pieces of game music were collected from game soundtrack CDs. They were presented through headphones at 68-82 dB (LAeq). Seven listeners were asked to rate the emotional features of each piece using 29 seven-step bipolar scales. The scores from the listeners were averaged and used in a principal component analysis. The results showed that a two-dimensional space accounts for 78 % of the data variance. The two dimensions were labelled "brightness" and "excitability", respectively, after the scales having high loadings on them. Brightness and excitability correspond well to the two dimensions of valence and activity that describe classical music. The results show that the dimensional structure of emotion in game music is consistent with that of classical music.
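The analysis pipeline implied above can be sketched in a few lines of Python: listener ratings are averaged per piece and scale, the scale columns are centred, and a principal component decomposition gives the variance explained and the loadings. The array shapes and the random placeholder data are purely illustrative, not the study's data.

    import numpy as np

    rng = np.random.default_rng(0)
    ratings = rng.uniform(1, 7, size=(7, 100, 29))   # listeners x pieces x scales (placeholder)

    mean_ratings = ratings.mean(axis=0)              # average over listeners -> 100 x 29
    X = mean_ratings - mean_ratings.mean(axis=0)     # centre each bipolar scale

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    print("variance explained by first two components:", explained[:2].sum())
    # Vt[0] and Vt[1] hold the loadings that would name the dimensions
    # (e.g. "brightness" and "excitability").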
University of Applied Sciences, Hamburg, Germany
ABSTRACT
In this paper, a semi-virtual violin is presented which has been developed in the context of a research project on desirable violin sound properties. Since the timbre and the reverberation characteristics of a violin are primarily determined by the nature of the resonance body, the main component of the platform is a modifiable virtual body. The method used here focuses on the musicians' perception of spectral components rather than on physical modelling. A silent violin designed with particular emphasis on authentic haptic and virtual properties is used as the interface between musician and virtual body. Binaural transfer functions of real violins, measured at the violinist's hearing position, serve as initial sound references from which further spectral modifications start. A specific filtering technique enables highly detailed modifications in the frequency domain, changing individual resonances or resonance areas while leaving other resonances unaffected. Implementation on an external signal processor provides real-time sound processing. Thanks to an overall system latency of less than 5 ms, the platform allows experiments on perceived sound properties and human-instrument interaction together with musicians. An example of application is given: the presented tool is used, among other things, to manipulate the vowel quality of violin tones by specifically changing formant properties, since, in concurrent research work, the authors seek a relationship between perceptible vowel properties in violin tones and the quality of instruments.
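As a hedged illustration of the kind of resonance manipulation described (the project uses its own filtering technique; the peaking biquad below is only a generic stand-in), one resonance region of a bridge-pickup signal can be boosted or cut while the rest of the spectrum is left largely unchanged.

    import numpy as np
    from scipy.signal import lfilter

    def peaking_biquad(fs, f0, gain_db, q=10.0):
        """RBJ-style peaking-EQ coefficients centred on a resonance at f0 (Hz)."""
        A = 10 ** (gain_db / 40.0)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return b / a[0], a / a[0]

    # Example: attenuate a body resonance near 1.2 kHz by 6 dB in a pickup signal x.
    # b, a = peaking_biquad(48000, 1200.0, -6.0)
    # y = lfilter(b, a, x)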
University of Applied Sciences, Hamburg, Germany
ABSTRACT
Measuring the body impulse responses of stringed instruments is one of the basic tasks in musical acoustics and is repeatedly approached with new methods. Presuming that the resonator is a linear system, its transfer function offers much information on the instrument's timbre, reverberation and directional radiation properties. Impulse responses are used, for example, as the starting point for modelling approaches, or to investigate the relationship between particular resonance constellations and instrument quality. The most common method of impulse response measurement is to excite the instrument at the side of the bridge using an impulse hammer, a shaker or a Dünnwald exciter.
This paper suggests an alternative, simple and individually adjustable technique which delivers highly reproducible responses. The method is based on exciting the damped strings at the bowing or plucking position by means of a thin copper wire which is pulled until it breaks. Since the longitudinal and torsional movements of the bridge caused by string deflections are taken into account, the stimulus of the body is much closer to the musical application. Because the geometric configuration of the measurement setup can be exactly specified, the proposed method allows highly accurate repetition in comparative studies. The setup, including a fully automated exciting apparatus as well as a 'silent' quadrochord, is described in detail. In addition, since the method was developed in the context of a research project on violin sound quality, an application is presented in which the technique is used to measure binaural impulse responses of violins.
School of Physics, The University of New South Wales, Sydney 2052, Australia
ABSTRACT
In many wind instruments, a non-linear element (the reed or the player's lips) is loaded by a downstream duct - the bore of the instrument - and an upstream one - the player's vocal tract. Both behave nearly linearly. In a simple model due to Arthur Benade, the bore and tract are in series, and this combination is in parallel with the impedance associated with the vibration of the reed or the player's lips. A recent theme for our research team has been measuring the impedance in the mouth during performance. This is an interesting challenge, because the sound level inside the mouth is tens of dB higher than the broad-band signal used to measure the tract impedance. We have investigated the regimes in which all three impedances play important roles in determining the playing frequency or the sound spectrum. This talk, illustrated with demonstrations, presents some highlights of that work, looking at several different instruments. First-order models of the bores of flutes, clarinets and oboes - the Physics 101 picture - are well known and used as metaphors beyond acoustics. Of course, the bores are not simple cylinders and cones, so we briefly review some of the more interesting features of more realistic models before relating performance features and instrument quality to features of the input impedance spectrum. Acousticians, and sometimes musicians, have debated whether the upstream duct, the vocal tract, is important. Setting aside flute-like instruments, the bore resonances near which instruments usually operate have high impedance (tens of MPa.s.m-3 or more), so the first-order model of the tract is a short circuit that has no effect on the series combination. In this country, that model is quickly discarded: in the didjeridu, rhythmically varying formants in the output sound, produced by changing geometries in the mouth, are a dominant musical feature. Here, the impedance peaks in the tract inhibit flow through the lips. Each produces a minimum in the radiated spectrum, so the formants we hear are the spectral bands falling between the impedance peaks. Heterodyne tones produced by simultaneous vibration of lips and vocal folds are another interesting feature. In other wind instruments, vocal tract effects are sometimes musically important: as well as affecting tone quality, the vocal tract can sometimes dominate the series combination and select the operating frequency, a situation used in various wind instruments. In brass instruments, it may be important in determining pitch and timbre. Saxophonists need it to play the altissimo register, and clarinettists use it to achieve the glissandi and pitch bending in, for example, Rhapsody in Blue or klezmer playing.
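In impedance terms, the series/parallel combination described above can be written as (a standard statement of Benade's model; the symbols are chosen here for illustration)

\[
\frac{1}{Z_{\text{load}}}
= \frac{1}{Z_{\text{reed}}}
+ \frac{1}{Z_{\text{bore}} + Z_{\text{tract}}},
\]

so that whenever \(Z_{\text{tract}}\) is comparable to or larger than \(Z_{\text{bore}}\), the upstream duct can no longer be neglected and may even select the operating frequency, as in the didjeridu, altissimo saxophone playing and clarinet pitch bending.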
Graduate School of Engineering, Kanazawa Institute of Technology, Nonoichi, Japan
ABSTRACT
It is often said that music affects the impression of our environment and our behavior. The interaction between the impression of visual information and music has been investigated in various contexts: music videos, television commercials, films, car audio, as well as computer graphics with music. However, the effect of music on our behavior has rarely been investigated with scientific experiments.
The author's group previously investigated the effect of music on performance and impression in the virtual environmental context of video games, namely racing games and Pachi-slot games (slot-machine type games). The results commonly showed that the "no music" condition gave the best performance, and that a musical excerpt which gave the game a highly "unpleasant" impression had a strongly negative effect on performance. The results can be interpreted as follows: musical excerpts which gave the games an unpleasant impression decreased the players' concentration.
In the present study, we examined the effect of music on performance and impression in a simple repetitive task. We used a repetitive calculation task based on the Uchida-Kraepelin test, in which the following trial was repeated for 10 min: two single-digit numbers were displayed on the computer screen, and the participant added them and entered the last digit of the result. The experiment consisted of eleven conditions; in each condition, the task was repeated three times with one-minute rest periods. In one condition, no music was presented. In each of the other conditions, one of ten musical excerpts was presented through headphones; the set of ten excerpts was identical to the set used in the previous game experiments. Eleven university students participated. After each condition, the participants evaluated the impression of the calculation task under that condition using 17 semantic differential scales. In a separate session, the participants listened to each excerpt without the calculation task and evaluated its impression using 18 scales. The percentage of correct answers did not vary significantly with condition; however, the number of answers did vary significantly, and the "no music" condition yielded the best performance (number of answers). Principal component analyses and multiple regression analyses showed that a musical excerpt which gives the task an "unpleasant" and "dark" impression decreases performance.
(1) Department of Physics Education, Seoul National University, Korea (2) CCRMA, Department of Music, Stanford University, USA
ABSTRACT
In Korea, 262 bell chimes have been inherited from the Choseon Dynasty (AD 1392-1910). Replicas of the bell chimes have recently been produced, but tuning methods are not yet established. To determine the acoustical characteristics of the old bell chimes, we analysed 261 bell sounds, and 31 of the bells were evaluated in listening tests by five traditional musicians to identify the relevant parameters. The external dimensions of one old bell were measured with a 3D measuring machine, and its vibrational modes were analysed by the finite element method.
Music and Musicology Research Unit, Unidade de Investigação em Música e Musicologia (UnIMeM), Portugal
ABSTRACT
I was asked to conceive an organ similar to the one in Zadar, Croatia, to be installed in the Douro estuary. The Zadar organ is powered by air that the sea waves pump into natural cavities in the shore. In the Douro estuary, however, the tide is the only existing vertical flow. For that reason I propose, as an alternative, the installation of a traditional concert organ fed by air pressurized by the rising water during the flood tide, which removes the need for waves. Description and operation of the device: a hydro-pneumatic chamber is located either in a mole below the water or on land, communicating with the estuary water body through a window or conduit whose top is level with the low tide. This chamber has a valve system, located above the highest tide mark, to control and deliver the pressurized air to the organ(s) or to release it to the atmosphere. When one of the valves is open, the chamber fills with or expels air as the tide flows. When all the valves are closed, the rise and fall of the tide respectively increase or decrease the pressure inside the chamber. Dimensions and potential: the determining factors of the potential are 1) the tidal amplitude, 2) the area of the rising water surface, and 3) the total volume of the chamber or connected chambers.
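As a rough illustration of those determining factors (a back-of-envelope relation assumed here, not taken from the paper), an isothermal compression of the trapped air by the rising internal water level gives

\[
p = p_{0}\,\frac{V_{0}}{V_{0} - A\,\Delta h},
\qquad
p - p_{0} \;\le\; \rho g\,\Delta h_{\text{tide}},
\]

where \(V_{0}\) is the trapped air volume, \(A\) the internal water surface area, \(\Delta h\) the rise of the internal water level, and \(\rho g\,\Delta h_{\text{tide}}\) the hydrostatic head of the tidal range, which bounds the excess pressure available to the organ.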
Musical use: the device is capable of feeding either a concert organ or a street organ in two ways: 1) in real time during the high tide, or 2) in deferred time by retaining the compressed air. The organs may be controlled by human hands, by automation through the internet, or randomly. Keeping in mind that the available potential is constant, the possible duration of a concert depends on the organ registration and the polyphonic density of the repertoire. Some non-musical applications include obtaining an air flow during the tide movements, producing energy through an aerogenerator, and drying and blowing applications.
Department of Biochemistry, Hebrew University-Hadassah Medical School, Jerusalem, Israel
ABSTRACT
A main cause of male infertility is a dramatic reduction in sperm quality. The semen of infertile males contains a large number of apoptotic and morphologically deformed (e.g., perforated and asymmetric) spermatozoa, which leads to failure of fertilization. Current sperm enrichment methods, such as the double-density gradient, are long, expensive and require trained workers. In this study we evaluate the combination of annexin-V magnetic particles (AVMP) with ultrasound standing wave (USW) separation technology in order to improve the sperm quality of infertile males. Preliminary results suggest that, using an intensity of 10 W/cm2 and a laminar medium flow rate of 60 uL/min, it is possible to trap only symmetric (both apoptotic and non-apoptotic) spermatozoa; morphologically deformed spermatozoa are more influenced by the viscous drag force of the medium, which washes them out of the acoustic unit. Apoptotic symmetric spermatozoa (ASS) are then removed by adding AVMP to the trapped symmetric spermatozoa and increasing the medium flow rate to 500 uL/min. Under these conditions, the AVMP-ASS complexes experience a larger acoustic radiation force than free symmetric spermatozoa, which keeps them in the acoustic unit, while the free symmetric spermatozoa, which are suitable for fertilization, are washed out by the medium flow. Preliminary results also suggest that USW may be applied to improve the outcome of in-vitro fertilization (IVF) by trapping both spermatozoa and ovum at the node of the USW, instead of relying on spermatozoa Brownian motion toward the ovum or injection of a spermatozoon into the ovum. Although USW can be used for sperm enrichment and IVF, the effect of ultrasound on embryo development still needs to be evaluated.
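For context, the force balance exploited in this separation is usually written with the textbook expressions for the primary acoustic radiation force on a small sphere in a standing wave and the Stokes drag (standard forms, not equations taken from the paper):

\[
F_{\text{rad}} = 4\pi a^{3} k E_{\text{ac}}\,\Phi\,\sin(2kx),
\qquad
F_{\text{drag}} = 6\pi \eta a v,
\]

where \(a\) is the particle radius, \(k\) the wavenumber, \(E_{\text{ac}}\) the acoustic energy density, \(\Phi\) the acoustic contrast factor, \(\eta\) the medium viscosity and \(v\) the flow velocity past the particle; larger or denser objects such as AVMP-spermatozoon complexes experience a stronger radiation force relative to drag and therefore stay trapped at the pressure nodes while free cells are carried away.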
University of Massachusetts, Dartmouth, USA
ABSTRACT
The Big Brown Bat (Eptesicus fuscus) uses frequency modulated (FM) echolocation calls to accurately estimate range and resolve closely spaced objects. Recent work by Fontaine and Peremans has shown that a sparse representation model for bat echolocation calls facilitates distinguishing objects spaced as closely as 2 microseconds in time delay and is also robust to noise over a realistic range of signal-to-noise ratios (SNR). Fontaine and Peremans used the random FIR filter compressive sensing (CS) technique as their input method. Their study demonstrated that the undersampled data provided by the FIR filter output still contain sufficient information to accurately reconstruct and resolve sparse target signatures using L1 minimization techniques from CS. Their work raises the intriguing question as to whether undersampled sensing approaches structured more like the bat's auditory system still contain the information necessary for the hyper-resolution observed in behavioral tests. This research investigates the ability to estimate sparse echo signatures using a downsampled filterbank as the sensing basis, which is closer to a bat auditory system than randomized FIR filters. The returning echoes are sensed using a discrete-time constant-bandwidth filter bank followed by downsampling, which loosely resembles the filtering and smoothing of the bat's cochlea. L1 minimization then reconstructs the sparse target return from this undersampled signal. Initial simulations demonstrate that this filterbank CS model reconstructs sparse sonar targets with a high degree of accuracy while substantially undersampling the filter outputs. In addition, the overdecimated filterbank CS approach has better target resolution than the matched filter for SNR values ranging from 5-45 dB and has better detection performance than the inverse filter method. This is all accomplished while undersampling the return echo signal by as much as a factor of six. The deterministic sensing basis has the distinct advantage over the random sensing basis that the circulant structure of the filterbank sensing matrix can easily be implemented in electric circuits.
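A toy Python sketch of the sparse-recovery step described above is given below: given a sensing matrix A that models the emitted call passing through a decimated constant-bandwidth filter bank, the sparse echo signature x is recovered by L1-regularised least squares via a basic ISTA iteration. The matrix A, the regularisation weight and the iteration count are illustrative assumptions, not the authors' settings.

    import numpy as np

    def ista(A, y, lam=0.05, n_iter=500):
        """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft thresholding."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)
            z = x - grad / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        return x

    # Usage idea: each column of A holds the filter-bank-and-decimation response of
    # the call at one candidate delay; peaks in the recovered x mark closely spaced
    # reflectors in the echo.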
Akashi National College of Technology, Akashi, Japan
ABSTRACT
Two longitudinal waves, the “fast and slow waves”, propagating through cancellous bone can be separately observed when the trabecular structure is oriented parallel to the wave propagation, but a single wave, in which the fast and slow waves completely overlap, is observed when the structure is oriented perpendicularly. This is thought to be because the propagation paths of the fast and slow waves differ in these cases. In this study, the changes in the propagation paths (or directions) with the angle of the trabecular orientation have been numerically investigated using finite-difference time-domain simulations with realistic cancellous bone models reconstructed from a three-dimensional micro-computed tomographic image. The simulated results suggest that both propagation paths change owing to the oblique trabecular orientation, which could affect the structural dependences of the propagation properties.
(1) Department of Physics and Mathematics, University of Eastern Finland, Kuopio, Finland (2) Diagnostic Imaging Centre, Kuopio University Hospital, Kuopio, Finland
ABSTRACT
Osteoporosis is a major worldwide health concern causing a growing number of fractures annually. Backscatter parameters derived from pulse-echo ultrasound (PEUS) measurements have been shown to relate to bone microstructure and composition. PEUS may also be applied at the most critical fracture sites of the proximal femur by using the dual-frequency ultrasound technique, which is capable of minimizing measurement errors that arise from the soft tissues overlying the bone. At the proximal femur the cortical layer is often too thin to be detected with traditional peak detection methods. In this study, the cepstrum method was applied for determination of thin cortical layer thickness in numerical simulations and experiments. Ultrasound propagation in a water-cortical bone-fat construct was simulated (11 simulations, cortical bone thickness varied from 0.5 to 1.5 mm) with the Wave 2000 software (finite difference time domain method). The transducer operated at 5 MHz, was 10 mm in diameter and had a focal length of 30 mm.
For the in vitro experiments, five thin slices of bovine cortical bone (thickness 0.5-2.5 mm) from the tibial shaft were cut with a low-speed diamond saw. Cortical-trabecular bone samples (n = 4) were sawn from the epiphysis of bovine tibia. The cortical-trabecular samples were scanned laterally to determine the mean cortical thickness with the cepstrum technique.
Acoustic measurements were conducted using a focused transducer with a centre frequency of 2.25 MHz and an UltraPAC ultrasound pulser/receiver system controlled with LabVIEW 8.2-based software. Ultrasound signals were analysed with Matlab. Cortical bone thickness determined with the cepstrum technique showed good agreement with the thickness of the cortical bone in the simulation geometry (r = 1.0, n = 11, p < 0.001) and with the bovine bone samples in vitro (r = 0.94, n = 9, p < 0.001). The accuracy of the cepstrum method, assessed as a mean absolute error, was 320 microns in vitro and 34 microns in simulations. In this study, the cepstrum analysis of ultrasound reflections from the cortical bone was found to provide a reasonable estimate of the thickness of thin cortical bone layers. The method may be applied for the assessment of cortical thickness at the most severe fracture sites of the proximal femur, where the cortical layer is thin and trabecular bone is present under the cortical layer. Moreover, it may be used to compensate for the cortical bone effect in ultrasound backscatter measurements of trabecular bone and could therefore provide a more reliable diagnosis of osteoporosis with ultrasound techniques.
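A compact Python sketch of cepstrum-based layer-thickness estimation as described above: the echo from a thin layer contains closely spaced reflections, and the lag of the dominant cepstral peak gives their separation. The assumed sound speed and lag search range are illustrative, not the study's values.

    import numpy as np

    def cortical_thickness_cepstrum(echo, fs, c_bone=3500.0, min_us=0.2, max_us=2.0):
        """Estimate layer thickness from the dominant cepstral peak."""
        spec = np.abs(np.fft.rfft(echo))
        cep = np.fft.irfft(np.log(spec + 1e-12))         # real cepstrum
        lags = np.arange(len(cep)) / fs                   # quefrency (s)
        band = (lags > min_us * 1e-6) & (lags < max_us * 1e-6)  # plausible two-way delays
        lag = lags[band][np.argmax(cep[band])]
        return c_bone * lag / 2.0                          # two-way travel -> thickness (m)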
Department of Physics, Ryerson University, Toronto, Ontario, Canada
ABSTRACT
Low intensity pulsed ultrasound (LIPUS) has been shown to improve bone fracture healing in in vivo animal and human clinical studies. In vitro, this improvement has been shown through improved mineralization in bone cells. Low-level heating of bone fractures has also been shown to improve healing, and low-level heat has been shown to improve mineralization in bone cell cultures. This study examines the effect of concurrent LIPUS and heat on MC3T3-E1 bone cells. The research version of a clinical LIPUS device was used. The bone cells were split into four treatment groups: LIPUS, heat, LIPUS + heat, and control. The LIPUS treatment was delivered with an intensity of ISATA = 10 mW/cm2 at a frequency of f = 1.5 MHz for 40 minutes each day over 15 days. The heat treatment was applied at 40ºC for 40 minutes each day over 15 days. The LIPUS + heat group received the treatments concurrently. Outside of heat treatment the cells were kept at 37ºC. The groups were tested for calcium mineralization using Alizarin red staining, and the samples were compared using a spectrometer for light absorbance at a wavelength of 405 nm. All treatment groups showed statistically significantly improved mineralization compared with the control cell cultures. Although the LIPUS and LIPUS + heat groups each showed almost a 4-fold increase in mineralization over the control, there was no statistical difference in mineralization between these two groups. These early results suggest that concurrent heat and LIPUS on MC3T3-E1 bone cells have no additive effect on mineralization.
Laboratory of Ornithology, Cornell University, USA
ABSTRACT
Older males tend to have a competitive advantage over younger males in sexual selection.
Therefore it is expected that males communicate their age to potential mates and opponents for rapid assessment. Although song repertoire size in songbirds is often mentioned as an age-related trait, many species do not change their repertoire after the first year, so repertoire size cannot serve as an age indicator in these species. Here we show that the trill notes in the songs of older banded wrens are reproduced with less variability between them, i.e. more consistently. In a playback experiment we also showed that banded wrens discriminate between younger and older birds based on structural aspects of their song. In a second experiment banded wrens also responded differentially to natural songs versus songs with artificially enhanced consistency. We argue that consistency in trill note reproduction may be achieved through practice in coordinating two independent sound sources, the left and right sides of the syrinx. Sexual selection may therefore operate on a phenotypic trait whose expression is enhanced by practice.
(1) Centre de Neuroscience Paris-Sud, CNRS-UMR819, Université Paris XI, Orsay, France (2) School of Biological and Chemical Sciences, Queen Mary University of London, UK
ABSTRACT
In songbirds, songs are involved in mate attraction and male contests. When emitting these signals to defend their territories, males provide information to the receivers, such as their species, group and individual identities, and each type of information is associated with a precise coding process. We have studied these different levels of coding in a songbird with a territorial flight song, the skylark Alauda arvensis. This bird is a species of open landscapes in which pairs settle in stable, adjoining territories gathered in groups spaced a few kilometres apart because of the heterogeneity of the habitat. The song consists of series of vocal units, named syllables, with an estimated repertoire of up to 700 different syllables per individual. This song is one of the most complex among songbirds, giving rise to a huge potential for variation at the syntactic level. In spite of such syntactic complexity, we demonstrated by playback experiments that species identity is encoded by simple temporal parameters: skylarks pay great attention to the sound-silence rhythm. The syntax is not important, and almost all syllable patterns elicit strong territorial responses.
To study group identity, we first carried out a detailed syntactic analysis of songs and showed that, in a given group, males (neighbours) share several sequences of syllables in their songs, whereas males settled in different groups (strangers) have no sequences in common. We tested group recognition by playing back natural and artificially modified songs. The results showed that, as in many birds, skylarks are less aggressive towards neighbours' songs than towards strangers' songs, and that the syntax of the sequences shared by neighbours is used by the birds to discriminate group members from strangers. The ordering of syllables within these shared sequences is behaviourally salient and encodes the group identity. Finally, concerning individual identity, we demonstrated by playback experiments that birds are able to vocally discriminate their different neighbours. Acoustic analyses revealed that males could also potentially use the syntactic arrangement of syllables in some individualised sequences to identify the songs of their neighbours. In conclusion, skylarks use different song features, or different ranges of the same feature, to convey these identities: the rhythm conveys species identity, and the syntax, in different components of the signal, conveys both group and individual identities. These coding strategies are discussed with regard to their propagation properties.
(1) University of New South Wales (UNSW), School of Biological, Earth and Environmental Sciences, Sydney, Australia (2) Oregon State University, Hatfield Marine Science Center, Newport, OR, USA (3) Korea Polar Research Institute, Songdo Techno Park, Incheon, Korea
ABSTRACT
Quantitatively surveying the vast majority of marine mammal populations is problematic using traditional visual methods alone. However, many species frequently produce loud, characteristic, stereotyped, long-range calls. This unique acoustic signature, coupled with the efficient propagation of sound through the ocean, has led to acoustic techniques being used to estimate the distribution, and more recently the abundance, of marine mammal species. Passive-acoustic methods also offer enormous potential for improving estimates of site-occupancy change for marine mammal populations. The goal of this study is to evaluate whether passive-acoustic techniques can help identify the driving factors behind changes in patterns of abundance and distribution. We use the leopard seal, Hydrurga leptonyx, an Antarctic pack-ice seal, to test this hypothesis. Its acoustic behaviour is highly stereotyped, and the variability around the age-related, temporal and behavioural influences on its calling patterns has been well documented. This study uses a long-term passive acoustic dataset collected from the same location within the Bransfield Strait, Western Antarctic Peninsula, between 2005 and 2010. The seasonal pattern in calling behaviour varied enormously during the four-year recording period, potentially allowing us to identify which influences, behavioural and/or environmental, were driving these differences. By combining acoustic analysis, remote sensing and GIS modelling, we examine whether physical-environmental data (sea ice and meteorological conditions etc.) and/or behavioural data (specifically age-cohort differences) are linked with the different patterns observed.
(1) Laboratoire d'Acoustique de l'Université du Maine - UMR CNRS 6613, Le Mans, France (2) Zoo de Beauval, Saint-Aignan, France
ABSTRACT
Elephants produce a broad range of sounds, from very low frequency rumbles to higher frequency trumpets. Trumpets are produced by a forceful expulsion of air through the trunk and are mainly tonal sounds. Elephants tend to trumpet when they are highly stimulated, and the quality of trumpeting varies with the context. Some elephant trumpeting sounds are very similar to a trumpet or a trombone sound, especially when played "brassy". Such brassy sounds, played at a high dynamic level, contain many harmonics as a consequence of wave steepening in the bore. Wave steepening is a cumulative effect of nonlinear propagation along the internal bore. A parameter for judging the severity of the nonlinear steepening is the critical shock-formation distance associated with a given input pressure profile. When the length of the bore is comparable to this critical distance, highly distorted waves can be observed in the bore, which is the case for brass instruments played at fortissimo level. The internal bore of the vocal system of the elephant, from the vocal folds to the open end radiating the sound - the trunk end - is several metres long, like a brass instrument. The vocal system is so long that the nonlinear steepening effect might be significant during elephant trumpeting. This hypothesis is discussed on the basis of elephant trumpet signals and assessed by comparison with the human voice and with brass musical instruments under playing conditions.
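A textbook estimate of that critical distance for an initially sinusoidal plane wave (given here for orientation; the paper's own estimate may differ) is

\[
x_{s} \;=\; \frac{c_{0}^{2}}{\beta\,\omega\,u_{0}} \;=\; \frac{\rho_{0}\,c_{0}^{3}}{\beta\,\omega\,p_{0}},
\qquad \beta=\frac{\gamma+1}{2},
\]

where \(\omega\) is the angular frequency, \(u_{0}\) and \(p_{0}\) the acoustic velocity and pressure amplitudes at the source, and \(\beta \approx 1.2\) in air; when the bore length approaches \(x_{s}\), the waveform steepens noticeably, which is the regime invoked for brassy playing and, possibly, for elephant trumpeting.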
(1) Kagawa National College of Technology, Takamatsu, Kagawa, Japan (2) SoundID, Maleny, Queensland, Australia (3) School of Biological Sciences, Flinders University, Adelaide, Australia (4) School of Biological Sciences, Flinders University, Adelaide, Australia
ABSTRACT
We have developed an automatic classification system for bird vocalisations in a process spanning the last seven years. In order to classify bird vocalisations automatically, we use pattern matching and cluster analysis: the degree of likeness between two images of the sound spectrum pattern is numerically evaluated and the vocalisations are classified accordingly. In traditional cluster analysis, similarity scales such as the Euclidean distance and cosine similarity are widely used to measure likeness. These measures do not perform well in the presence of noise or pattern distortions and, while good at finding close matches, perform poorly when tasked with similarity searches.
As an improvement, we have developed a new similarity scale called the Geometric Distance (GD), which overcomes the limitations of the earlier models while improving the overall classification accuracy. We have been developing automatic classification software for bird vocalisations (and other sounds) using the GD. As of early 2010 we have commercial software that extracts LPC spectrum patterns (frequency-power) from bird vocalisations (or other sounds) and classifies them using the one-dimensional GD (1-d GD); it is being used extensively by researchers worldwide. This one-dimensional method was derived from an earlier two-dimensional method that we developed but found computationally too slow. Taking advantage of the faster processors available today, we have now moved on to a method that extracts spectrograms (time-frequency-power) from the bird vocalisations and processes them using the two-dimensional GD (2-d GD). From experimental testing, we have found that 1-d GD performs significantly better than the Euclidean distance and cosine similarity, and that 2-d GD performs significantly better than 1-d GD. As anticipated, it is also slower, but practical, and offers better-than-human-expert discrimination. The GD method is inherently n-dimensional, and processing time can be traded for accuracy. In this paper, we introduce the principles of the 2-d GD, demonstrate the two-dimensional pattern matching software, and describe design considerations for a new automatic classification system for bird vocalisations using 2-d GD and cluster analysis. Nothing in the method restricts it to bird vocalisation analysis; it can in principle be used for any sound classification with minor parameter changes.
(1) Department of Information Science, Tohoku Gakuin University, Japan (2) National Research Institute of Fisheries Engineering, Fisheries Research Agency, Japan (3) Furuno Electric Co. Ltd., Japan
ABSTRACT
Identification and classification of fish species are essential for acoustic surveys of fisheries. The echo from a fish contains components from multiple reflections, including those from the swim bladder, head and other organs, and can be used for the discrimination of fish species and the estimation of fish abundance. It is therefore necessary to clarify the relationship between these inner organs and the temporal structure of the fish echo, and to clarify which characteristics of the temporal structure are useful for classifying fish species. Using a dolphin-like sound, echoes were measured from anaesthetized fish of three species (red seabream, Pagrus major; Japanese jack mackerel, Trachurus japonicus; chub mackerel, Scomber japonicus) in an acoustic experimental tank (10 x 15 x 10 m). Both the temporal structure and the echo duration were extracted from the fish echo using a cross-correlation function and a lowpass filter. The extracted temporal structure was shown to match well with X-ray images of the fish along the incident sound beam axis. The temporal structure and echo duration changed depending on the fish species and orientation. It was shown that fish species could be classified by the temporal structure under the assumption that the fish orientation was known.
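A small Python sketch of the echo-structure extraction described above: the echo is cross-correlated with the transmitted (dolphin-like) pulse and the envelope of the result is low-pass filtered, so that its peaks reveal reflections from the swim bladder, head and other organs. The filter order, cut-off and sampling rate are assumptions for illustration only.

    import numpy as np
    from scipy.signal import correlate, hilbert, butter, filtfilt

    def echo_temporal_structure(echo, pulse, fs, cutoff_hz=20e3):
        """Return the smoothed matched-filter envelope of a fish echo."""
        mf = correlate(echo, pulse, mode='same')           # pulse compression
        env = np.abs(hilbert(mf))                           # envelope
        b, a = butter(4, cutoff_hz / (fs / 2), btype='low')
        smooth = filtfilt(b, a, env)
        return smooth                                        # peaks mark reflecting organs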
Department of Mechanical Engineering, University of Hawaii at Manoa, Honolulu, USA
ABSTRACT
Atherosclerosis, the cause of myocardial infarction, stroke, acute coronary syndromes and ischemic gangrene, is a multifaceted disease. Atherosclerotic lesions, or atheromata, consist of asymmetric focal thickenings of the intima. Rupture-prone lesions may remain undetected; upon manifestation they expose pro-thrombotic material from the core of the plaque to the blood, transforming a stable plaque into a vulnerable, unstable one that is likely to rupture, induce a thrombus and elicit an acute coronary syndrome. Moreover, this type of lesion has been attributed to more than half of all acute myocardial infarction incidents. The current conventional imaging methods for the detection of atherosclerotic lesions are intravascular ultrasound (IVUS), magnetic resonance imaging (MRI) and computed tomography (CT). Although these techniques have proven useful in clinical practice, significant limitations exist: IVUS can characterize atheromata only in the vicinity of the ultrasound catheter, MRI's long image-acquisition time hinders the consistent imaging of structures, and CT lacks the ability to visualize rupture-prone, non-stenotic, lipid-rich lesions.
The ability to recognize specific biological markers that occur when rupture-prone atherosclerotic plaque develops in normal artery walls could aid detection and thus facilitate an earlier diagnosis. This study investigates the in vitro detection of vulnerable plaque with targeted ultrasound contrast agents (UCAs). Scanning acoustic microscopy (SAM) at centre frequencies of 50 MHz and 100 MHz was used for the quantification of the mechanical properties of excised artery tissue sections. Targeted UCAs were conjugated to specific antibodies and allowed to bind to sites of interest. Prior to the acoustic and epifluorescence investigation, artery sections with thicknesses of 50 µm and 60 µm were obtained using a Leica CM3050S cryostat (Leica, Bannockburn, IL, USA). Following the acquisition of the RF data with SAM for the quantification of mechanical properties, backscatter coefficients and attenuation, the samples were prepared for histological staining and epifluorescence microscopy. The alignment of the optical and acoustic lenses allowed the determination of regions of interest (ROIs) which exhibited bound UCAs and ROIs without the presence of the agents. The concurrent epifluorescence and acoustic investigation of overlapping ROIs allows the direct comparison of the mechanical properties of normal versus atherosclerotic artery sections. The efficiency of UCA binding and the backscatter and attenuation exhibited by these sites were examined. This preliminary study provides new insights into the potential for detecting vulnerable plaque with intravascular ultrasound (IVUS) and targeted UCAs.
Mayo Clinic College of Medicine, USA
ABSTRACT
Introduction: Arterial elasticity has been proposed as an independent predictor of cardiovascular disease and mortality. Measurement of the wave speed dispersion for different modes of propagation in thin shells can be used to estimate the elastic properties of these structures. Using ultrasound radiation force, it is possible to generate local shear waves which can be tracked with pulse-echo ultrasound to measure their speed of propagation. In the present work, we present a modal analysis performed on an elastic tube and an excised pig carotid artery that can be used to estimate the elastic properties. Methods: A urethane tube and an excised artery were mounted in a metallic frame, cannulated and embedded in tissue-mimicking gelatin. Shear waves were generated in the wall of the tube/artery using a 3 MHz confocal transducer with a 200 μs toneburst, repeated at a rate of 50 Hz. The propagation was measured using pulse-echo ultrasound at 21 locations along the vessel wall spaced 1 mm apart. The transmural pressure was varied from 10 to 100 mmHg in 10 mmHg increments using a column of water. Results: The group velocities of the shear wave in the tube and the artery were significantly different, around 11 m/s for the tube and 5 m/s for the artery. The speed of propagation in the tube showed no variation with increasing transmural pressure, while the group velocity in the artery increased from 4 m/s at 10 mmHg to 6.2 m/s at 100 mmHg. The modal analysis, using a 2D FFT of the spatio-temporal signal, showed in the tube a unique antisymmetric Lamb-wave-like mode that was almost invariant with pressure, whereas the artery exhibited multiple modes, antisymmetric and symmetric-like, that varied with pressure. Conclusion: Radiation force is a useful technique to generate localized shear waves in cylindrical shell structures. The changes in the observed dispersion curves in the arteries are very encouraging, suggesting that this methodology has potential use in the study of arterial elasticity.
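A schematic Python version of the modal analysis mentioned above: a 2D FFT of the space-time wall-motion signal maps the data into the frequency-wavenumber plane, where Lamb-type modes appear as ridges whose positions give phase velocities. The spatial/temporal sampling steps and the ridge-picking rule are illustrative assumptions.

    import numpy as np

    def dispersion_map(u, dx, dt):
        """u: wall motion, shape (n_positions, n_time_samples)."""
        U = np.fft.fftshift(np.abs(np.fft.fft2(u)))
        k = np.fft.fftshift(np.fft.fftfreq(u.shape[0], d=dx))   # spatial frequency (1/m)
        f = np.fft.fftshift(np.fft.fftfreq(u.shape[1], d=dt))   # temporal frequency (Hz)
        return f, k, U

    def phase_velocity_ridge(f, k, U, fmin=100.0):
        """For each frequency above fmin, pick the wavenumber of maximum energy."""
        c = []
        for j, fj in enumerate(f):
            if fj < fmin:
                continue
            kj = k[np.argmax(U[:, j])]
            if kj != 0:
                c.append((fj, fj / abs(kj)))                     # c = f / k
        return np.array(c)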
(1) School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland (2) Department of Electrical Engineering, Stanford University, USA (3) Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, USA
ABSTRACT
Calibration of ultrasound tomography devices is a challenging problem of high practical interest in medical and seismic imaging. In this work we address the calibration problem for the circular apertures used in breast cancer detection. The sensors are installed on a circular ring and act both as transmitters and receivers; the ring surrounds the breast and the sensors are fired each in turn. In order to estimate the characteristics of the tissue, one needs to solve an inverse problem based on the recorded ultrasound readings, and this inverse problem is very sensitive to the exact locations of the sensors. Traditionally, the sensors are assumed to be spaced equidistantly on the circle, which is not realistic owing to practical difficulties in the installation. Thus, a procedure is needed to calibrate the device periodically during its lifetime. We introduce a new method of calibration based on the time-of-flight (ToF) measurements between sensors when the enclosed medium is homogeneous. Knowing all the pairwise ToFs, one can easily find the positions of the sensors using the multi-dimensional scaling (MDS) method. In practice, however, we face two major sources of loss. One is the transitional behavior of the sensors, which makes the ToF readings for close-by sensors unavailable; the other is the random malfunctioning of the sensors, which leads to randomly missing ToF measurements. Therefore, if we construct a matrix from the pairwise ToFs, we encounter two different types of missing entries: structured and random.
On top of the missing entries, one also needs to cope with another source of error: since the impulse response of the piezoelectric element and the time origin of the measurement procedure are not known exactly, a time mismatch is also added to the measurements. In this work, we first show that a matrix defined from all the ToF measurements has rank at most two, independent of the number of sensors. In order to estimate the structured and random missing entries, utilizing the fact that the matrix in question is low-rank, we apply a state-of-the-art low-rank matrix completion algorithm. Once we have a good estimate of the missing ToF measurements, we use MDS to find the correct positions of the sensors. Analytic error bounds on the estimated positions are computed in a regime with zero time mismatch and arbitrary SNR; the results are then extended to the regime with non-zero time mismatch and high SNR. Finally, simulations mimicking the measurements of an ultrasound tomography device are performed using the proposed position calibration algorithm, and the consistency of the results is confirmed.
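A bare-bones Python sketch of the final reconstruction step: once the ToF matrix is completed, pairwise distances follow from the sound speed of the homogeneous medium, and classical MDS recovers the sensor positions up to a rigid motion. The sound speed value is a placeholder, and the completion and time-mismatch steps are omitted.

    import numpy as np

    def classical_mds(tof, c=1480.0, dim=2):
        """tof: completed n x n matrix of times of flight (s); returns n x dim positions."""
        D2 = (c * tof) ** 2                        # squared pairwise distances
        n = D2.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
        B = -0.5 * J @ D2 @ J                      # Gram matrix of centred positions
        w, V = np.linalg.eigh(B)
        idx = np.argsort(w)[::-1][:dim]            # keep the two largest eigenvalues
        return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))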
Mayo Clinic College of Medicine, Ultrasound Research Laboratory, Department of Physiology and Biomedical Engineering, Rochester, MN, USA
ABSTRACT
Improved methods for guided prostate biopsy are required to effectively direct the needle to the suspected site. In addition, tissue stiffness measurement would help identify a suspected site at which to perform the biopsy, because stiffness has been shown to correlate with pathology. An innovative approach known as Shearwave Dispersion Ultrasound Vibrometry (SDUV) has provided strong evidence of adequate tissue viscoelasticity characterization in vivo. For prostate applications, it is important that the SDUV technique be guided by an imaging modality, leading to better biopsy sampling and a reduction of the sampling error. The objective here is to introduce a combined system of vibro-acoustography (VA) and SDUV to perform a "virtual biopsy" at a specific location, at which measurements of prostate shear elasticity and viscosity may be obtained.
VA uses two intersecting ultrasound beams with slightly different frequencies to produce a dynamic radiation force at the difference frequency Δf. An acoustic emission field is thus produced, which is detected by a low-frequency hydrophone, and the signal is used to form an image of the object (i.e. the prostate). The SDUV method uses a "push" transducer transmitting repeated tone-bursts of ultrasound to generate propagating shear waves within the studied medium at the transducer focus. A shear wave propagating outwards from the vibration centre is monitored by a "detect" transducer operating in pulse-echo mode at two locations along the propagation path. The propagation speed of the shear wave is estimated by tracking its phase change over the distance it has propagated. The phase velocity of the shear wave is characterized at a number of selected frequencies to assess the dispersion of the wave velocity, and the resulting shear wave velocity dispersion curve is fitted with a Voigt model to solve for elasticity and viscosity. Here, the envisioned operation of SDUV on the prostate was as follows: initially, an image of the prostate is taken using VA to locate a site for SDUV. Then a location of interest is selected within the VA image, and the ultrasound "push" transducer temporarily switches to SDUV mode to measure prostate elasticity and viscosity at the specified location. A pilot study demonstrated the feasibility of combining VA to guide SDUV shear elastic modulus and viscosity measurements of a prostate in vitro. These results provide substantial motivation to further develop a VA system to monitor SDUV "virtual biopsy" in the prostate.
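The phase-difference speed estimate and the Voigt-model fit referred to above are usually written in the following standard SDUV forms (symbols chosen here for illustration, not quoted from the paper):

\[
c_{s}(\omega) \;=\; \frac{\omega\,\Delta r}{\Delta\phi},
\qquad
c_{s}(\omega) \;=\; \sqrt{\frac{2\bigl(\mu_{1}^{2}+\omega^{2}\mu_{2}^{2}\bigr)}
{\rho\bigl(\mu_{1}+\sqrt{\mu_{1}^{2}+\omega^{2}\mu_{2}^{2}}\bigr)}},
\]

where \(\Delta\phi\) is the phase difference measured over the distance \(\Delta r\) between the two detection points, \(\rho\) the tissue density, and \(\mu_{1}\) and \(\mu_{2}\) the shear elasticity and viscosity obtained from the fit.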
(1) Mayo Ultrasound Lab, USA (2) University of Kansas, USA
ABSTRACT
Diastolic dysfunction is the inability of the left ventricle (LV) to supply sufficient blood volume to the systemic circulation under physiological conditions and is often accompanied by LV myocardial stiffening. To this end, our group has been investigating the use of Shearwave Dispersion Ultrasound Vibrometry (SDUV), a noninvasive ultrasound-based method for quantifying viscoelasticity of the myocardium. The primary aim of this study is the design and testing of a viscoelastic material suitable for validation of the Lamb wave model in the heart. Here, we report the results of quantifying the viscoelasticity of urethane rubber samples using SDUV and our embedded sphere method. A urethane plate (11.5 cm x 8 cm x 1.2 cm) was embedded in gelatin inside a plastic container and mounted on a stand inside a water tank. A mechanical actuator was used to induce harmonic waves in the frequency range 40-500 Hz. A 5 MHz pulse-echo transducer operating at a 4 kHz pulse repetition rate was used to detect the motion at multiple points away from the excitation point. Linear regression of the phase data provided estimates of shear wave speed at each frequency (shear wave dispersion). An antisymmetric Lamb wave model was fitted to the dispersion data to estimate elasticity and viscosity of the material. An ABAQUS finite element model (FEM) of a viscoelastic plate submerged in water was used to study the appropriateness of the Lamb wave dispersion equations [4]. Prony series coefficients used to define the material properties of the FEM model were used in MATLAB simulations of the antisymmetric Lamb wave equations for comparative purposes.
An embedded sphere method [3], based on measuring the displacement of a solid sphere inside the material of interest due to a constant radiation force, was used as an independent measurement of the viscoelasticity of the urethane rubber. The FEM dispersion data were in excellent agreement with the theoretical predictions. The elasticity and viscosity of the urethane rubber were 45.0±1.0 kPa and 5.5±0.5 Pa·s using SDUV, and 46.6±3.2 kPa and 5.73±0.78 Pa·s using the embedded sphere method. The agreement of the FEM and the theoretically predicted Lamb wave dispersion suggests that the mathematical model accurately describes the motion of the medium. The values of elasticity and viscosity measured using the SDUV and embedded sphere methods agree within one standard deviation, suggesting that the SDUV method has the capacity to produce accurate estimates of material properties.
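The shear wave speed estimation step mentioned above (linear regression of phase against propagation distance at each frequency) can be sketched as follows; the distances, phases and function name are hypothetical placeholders rather than data from the experiment.

    import numpy as np

    def shear_speed_from_phase(freq, distances, phases):
        # Estimate the shear wave phase velocity at freq (Hz) by linear
        # regression of the unwrapped phase (rad) against distance (m).
        # The slope magnitude is the wavenumber k, so c = 2*pi*f / |k|.
        phases = np.unwrap(phases)
        slope, _ = np.polyfit(distances, phases, 1)   # rad/m
        return 2.0 * np.pi * freq / abs(slope)

    # Illustrative use at a single excitation frequency.
    freq = 200.0                                      # Hz
    r = np.array([2e-3, 4e-3, 6e-3, 8e-3])            # m, detection points
    phi = np.array([0.0, -0.9, -1.8, -2.7])           # rad, hypothetical phases
    print(f"shear wave speed ~ {shear_speed_from_phase(freq, r, phi):.2f} m/s")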
(1) Faculty of Engineering, Shinshu University, Nagano, Japan (2) Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
ABSTRACT
Head-related transfer functions (HRTFs) play a crucial role in sound localization by the human auditory system. Many researchers have investigated how HRTFs relate to sound localization, and have developed virtual auditory displays (VADs) that present virtual sound sources synthesized with HRTF-based audio signal processing. However, HRTFs are highly individual because each listener's head and external ear shapes differ. Therefore, ideal 3D auditory space synthesis using HRTFs necessitates personal HRTF measurements or simulations, which degrades the versatility of such HRTF-based VADs. Resolving the problems related to individual HRTF variation requires clarification of the physical mechanism that yields spectral notches and peaks depending on the pinna shape. Researchers have therefore investigated the relation between HRTFs and the pinna shape using measurements of real and artificial heads or ear replicas. Moreover, numerical simulations, such as the boundary element method (BEM) or the finite difference time domain (FDTD) method, are considered practical means to study this issue. As described herein, surface pressures on a pinna and HRTFs are calculated using BEM for various sound source elevation angles. The simulated surface pressures are analyzed in the time and frequency domains. Each boundary element on the surface is regarded as a secondary source radiating a sound wave corresponding to a reflection from that surface region, thereby enabling direct observation of a pinna's effects on HRTFs. Numerical results demonstrate the extent to which each part of the pinna contributes to the production of HRTF spectral notches and peaks depending on the source elevation.
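This is not the BEM computation used in the study, but a much simpler delay-and-add model, sketched below, illustrates the underlying mechanism: a direct path plus one delayed pinna reflection produces spectral notches near odd multiples of 1/(2*delay). The delay and gain values are arbitrary assumptions.

    import numpy as np

    def reflection_transfer(freq, delay, gain):
        # Magnitude of H(f) = 1 + gain * exp(-j*2*pi*f*delay): a direct path
        # plus one delayed reflection. Notches fall near odd multiples of
        # 1/(2*delay) when gain is close to 1.
        return np.abs(1.0 + gain * np.exp(-2j * np.pi * freq * delay))

    # A 60 us reflection delay (roughly 2 cm of extra path in air) places the
    # first notch near 8.3 kHz, in the range typical of pinna notches.
    freq = np.linspace(200.0, 16e3, 512)
    mag_db = 20.0 * np.log10(reflection_transfer(freq, delay=60e-6, gain=0.8))
    print(f"first notch near {1.0 / (2.0 * 60e-6) / 1e3:.1f} kHz")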
School of Engineering, Edith Cowan University, Joondalup, WA, Australia
ABSTRACT
In this paper, we demonstrate the principle of acoustic transmission for communications and power supply in vivo. The acoustic transmissions are intended for fixed implanted biomedical devices, such as pacemakers, but more importantly for neural implants, where wired and wireless RF communications cannot be used. The acoustic transmissions can be used both for wireless communications and to recharge the device in vivo using conventional piezoelectric power harvesting techniques. Current research in biomedical engineering is looking at implantable devices to regulate conditions such as Parkinson's disease and other neuromuscular conditions. Transient devices, such as those used in the gastrointestinal tract, make use of high-frequency RF, where the permittivity of the human body begins to decrease. However, significant power is still required. This results in local tissue heating due to the absorption of the EM radiation, and this heating has side effects that limit the exposure times for safe practice. For neural implants, where the goal is to have the device implanted for long periods of time without complications and with minimal side effects, RF communications cannot currently be used. Acoustic transmissions represent an ideal low-power method of communicating with in-vivo biomedical devices, and of recharging them through power harvesting. In this work, we present results showing the performance of the communications channel and sample communications signals through a biological specimen. The frequency response, transfer function and transient response (at resonance) of the communications channel were measured. Based on the frequency response of the communications channel, PSK was chosen as the modulation method, and successful communication was achieved through the channel. We also show the results of preliminary work on harvesting the acoustic signals to provide power for recharging in-vivo biomedical devices.
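As a rough, self-contained illustration of the PSK scheme mentioned above, the sketch below modulates and coherently demodulates a binary PSK signal; the sample rate, carrier frequency, symbol length and noise level are placeholder values, not the parameters used in the experiments.

    import numpy as np

    FS = 200_000        # sample rate in Hz (assumed)
    FC = 40_000         # carrier frequency in Hz (placeholder near a piezo resonance)
    SYMBOL_LEN = 200    # samples per bit (assumed)

    def bpsk_modulate(bits):
        # Map bits to +/-1 symbols and multiply by a sinusoidal carrier.
        symbols = np.repeat(2 * np.asarray(bits) - 1, SYMBOL_LEN)
        t = np.arange(symbols.size) / FS
        return symbols * np.cos(2.0 * np.pi * FC * t)

    def bpsk_demodulate(signal):
        # Coherent demodulation: mix down with the carrier, integrate per bit,
        # and threshold the result.
        t = np.arange(signal.size) / FS
        baseband = signal * np.cos(2.0 * np.pi * FC * t)
        sums = baseband.reshape(-1, SYMBOL_LEN).sum(axis=1)
        return (sums > 0).astype(int)

    bits = np.random.randint(0, 2, 32)
    received = bpsk_modulate(bits) + 0.3 * np.random.randn(32 * SYMBOL_LEN)  # noisy channel
    print(np.array_equal(bpsk_demodulate(received), bits))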
Faculty of Engineering, Nagasaki University, Nagasaki, Japan
ABSTRACT
In this paper, we propose a novel classification procedure for distinguishing between normal and abnormal respiratory sounds based on a stochastic approach. The main characteristic of our procedure is the use of two stochastic models to detect abnormal respiratory sounds precisely: hidden Markov models (HMMs) for acoustic spectral features, and bigram models for the occurrence of acoustic segments in each inspiratory/expiratory period. This approach assumes that each inspiratory/expiratory period consists of a time sequence of characteristic acoustic segments. Most respiratory sounds from patients with emphysema pulmonum contain abnormal sound segments called adventitious sounds. Depending on the type of lung disease, the adventitious sounds are classified into several types, and each has a characteristic time sequence of specific spectral features and an occurrence sequence of characteristic acoustic segments. We use HMMs to capture the sequence of spectral features, and acoustic segment bigrams to model the stochastic occurrence of segments.
We manually labeled our recorded respiratory data and created a transcription corpus using segment symbols. The classification procedure comprises a training process and a test process. In the training process, acoustic models for normal and abnormal respiratory sounds are trained using this transcribed database. For precise acoustic modeling, separate acoustic models for adventitious sound segments and breath sound segments are used to express abnormal respiratory sounds. For normal respiratory sounds, the entire period of each respiratory sound was used to generate an HMM for an inspiratory or expiratory period. In the test process, the classification procedure detects the segment sequence with the highest total likelihood and yields the classification result. The total likelihood combines the spectral likelihood derived using the HMMs and the segment occurrence likelihood calculated using the acoustic segment bigram models. Classification experiments were carried out using 1544 respiratory sounds from 109 patients and 53 able-bodied subjects, where the numbers of normal and abnormal respiratory sounds were 990 (64%) and 554 (36%), respectively. Our procedure achieved a classification rate of 84.2% between the normal and abnormal respiratory sounds, demonstrating the effectiveness of our stochastic approach. Experimental results also revealed that this modeling led to a 4.8% reduction in the classification error rate compared to a conventional method that uses deterministic rules to describe segment sequences instead of the proposed segment bigram.
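A minimal sketch of the likelihood combination step described above is given below. It is not the authors' implementation: the segment symbols, bigram probabilities and HMM log-likelihood values are invented placeholders, and the HMM scoring itself is assumed to be computed elsewhere.

    import math

    # Hypothetical bigram probabilities over acoustic segment symbols (per class).
    BIGRAM_ABNORMAL = {("breath", "wheeze"): 0.2, ("wheeze", "breath"): 0.3,
                       ("breath", "breath"): 0.5}
    BIGRAM_NORMAL = {("breath", "breath"): 0.9}

    def bigram_log_likelihood(segments, bigram, floor=1e-6):
        # Sum log P(s_i | s_{i-1}) over the segment sequence, with a small
        # floor probability for unseen segment pairs.
        return sum(math.log(bigram.get(pair, floor))
                   for pair in zip(segments, segments[1:]))

    def total_score(hmm_loglik, segments, bigram, weight=1.0):
        # Total score = spectral (HMM) log-likelihood plus a weighted segment
        # occurrence log-likelihood; the higher-scoring class wins.
        return hmm_loglik + weight * bigram_log_likelihood(segments, bigram)

    # Toy comparison of an 'abnormal' and a 'normal' hypothesis for one period.
    score_abnormal = total_score(-120.0, ["breath", "wheeze", "breath"], BIGRAM_ABNORMAL)
    score_normal = total_score(-118.0, ["breath", "breath", "breath"], BIGRAM_NORMAL)
    print("abnormal" if score_abnormal > score_normal else "normal")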
(1) Department of Acoustics, Aalborg University, Denmark (2) Department of Health Science and Technology, Aalborg University, Denmark
ABSTRACT
Weak sounds originating from the heart, coronary arteries or the lungs can be used to perform a noninvasive diagnosis of certain diseases. However, the sounds of interest can be difficult or even impossible to pick up due to loss of signal when the sound is transmitted through the tissue and from the surface of the skin to the transducer. If the impedance of the skin is known, it may be possible to optimize the transducer to achieve an improved signal in a certain frequency range while attenuating disturbing noise. Further, it is known from the classical stethoscope that the sound picked up can be influenced by changing the pressure on the chest piece of the stethoscope. A high pressure will stretch the skin like a drum skin and attenuate lower frequencies, while a lighter pressure will broaden the frequency range.
By using an impedance (standing wave) tube, it is possible to measure the impedance of the surface of the skin and at the same time investigate the influence of different pressures and transducer diameters. The impedance tube is made specifically for measuring chest impedances in the frequency range from 50 Hz to 5 kHz. A maximum length sequence (MLS) is used as the excitation signal, and the surface impedance is calculated from the measured impulse responses. The diameters used for the setup are in the same range as those of normal stethoscopes, and the force applied to the tube ranges from close to zero to pressures in the range normally used for auscultation. The study involves measuring the chest impedance of several people at locations on the chest where auscultation of the heart is normally carried out. Knowledge of the chest impedance is intended to support the optimal choice of transducer (e.g. microphone, force transducer or accelerometer), sensitivity and dynamic range, frequency range, coupling area, coupler geometry etc. for a system to pick up chest sounds. Of particular interest are the weak sounds of the heart, e.g. murmurs originating from the coronary arteries, and fetal heart sounds.
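The MLS-based impulse response recovery mentioned above can be sketched as follows: circular cross-correlation of the recorded signal with the excitation (computed via FFTs) approximates the impulse response, because the MLS autocorrelation is close to a delta function. The sequence length and the toy response used here are arbitrary assumptions.

    import numpy as np
    from scipy.signal import max_len_seq

    def mls_impulse_response(excitation, recording):
        # Circular cross-correlation of the recording with the MLS excitation,
        # computed with FFTs; this approximates the impulse response h.
        n = excitation.size
        spec = np.fft.rfft(recording, n) * np.conj(np.fft.rfft(excitation, n))
        return np.fft.irfft(spec, n) / n

    # Hypothetical measurement: a 2**14 - 1 sample MLS passed through a toy
    # 'skin' impulse response (circular convolution stands in for the system).
    mls = 2.0 * max_len_seq(14)[0] - 1.0                 # +/-1 sequence
    h_true = np.zeros(200)
    h_true[[0, 30, 90]] = [1.0, 0.5, 0.2]
    recorded = np.fft.irfft(np.fft.rfft(mls) * np.fft.rfft(h_true, mls.size), mls.size)
    h_est = mls_impulse_response(mls, recorded)[:200]
    print(np.allclose(h_est, h_true, atol=0.05))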
Consultant, USA
ABSTRACT
This paper presents listener envelopment calculations and low-frequency strength G and reverberation time RT measurements in shoebox and non-shoebox concert halls. Soulodre and coworkers determined the response of listeners exposed to direct sound, early reflections and reverberant sound in answer to the question "rate only your perception of being enveloped or surrounded by the sound". They developed a formula for the calculation of LEV that correlated highly with their subjective judgments, but it includes the late strength factor G_late and the late lateral fraction LF_late, data that are not available in the literature. An alternate formula is devised here that makes use of the overall strength factor G, the clarity factor C_80 and the late Binaural Quality Index BQI_late, where BQI = 1 - IACC_late, all factors that are available. Calculations of LEV for 21 concert halls are made and correlated with the overall strength factor G. Measurements of the relation between strength G and reverberation time at 125 Hz made in shoebox and non-shoebox halls are presented. In shoebox halls, the correlation between the two is high, as would be expected from the Sabine/Eyring derivations, but in non-shoebox halls there is almost no correlation. The reasons for this result are discussed.
Defence Science & Technology Organisation and University of Sydney Institute of Marine Science, NSW, Australia
ABSTRACT
Marine animals use sound extensively in an environment where vision is usually very limited and sound travels much larger distances than it does in air. We also make extensive use of sound in the ocean, and there is concern about the impact that the noise of human activities has on marine animals. This paper will review what is known about the effects of noise on marine animals in the context of their natural acoustic environment and also relate this to the effects of noise on terrestrial animals. It will discuss the wide range of audibility that marine mammals experience as a result of the variation in natural ambient noise and sound propagation, and how the noise from human activities compares with the noise from natural sources. It will also consider how the extensive knowledge of noise effects on terrestrial animals can be related to effects on marine animals. The challenges for further research will also be discussed, particularly the need to determine the longer-term biological effects of noise.
Snecma - Safran Group, Centre de Villaroche, Direction R&T / YUC, 77550 Moissy Cramayel, France
ABSTRACT
International standards governing the noise of newly manufactured aircraft are developed by the International Civil Aviation Organization (ICAO). In view of the long cycles (research, design, development, production, operation, evolution of infrastructures) involved in the air transport business, the purpose of these standards is to provide the needed stability, supporting a global long-term view that allows manufacturers to anticipate future needs through the development of affordable technologies. Within ICAO, standards are developed by the Committee on Aviation Environmental Protection (CAEP). In 2001, CAEP approved the definition of more stringent noise limits (Chapter 4), effective as of 2006.
As another significant outcome of the whole process, recommendations were made in favour of a "Balanced Approach" encompassing four elements: reduction of noise at the source, land-use planning, noise abatement procedures and aircraft operating restrictions. This concept implies the elaboration and implementation of a process meant to help the assessment and resolution of noise problems at airports in the most cost-effective manner. The Balanced Approach in effect challenges the ICAO member states to "study and prioritise research and development of economically justifiable technology", to foster the development of noise abatement procedures, while emphasizing the importance of land-use planning and environmental management aspects.
In parallel, improved integration of the research community at the European level has been pursued. Through the various individual projects and the networking efforts carried out over the last ten years, the European aircraft noise research community has now reached a critical mass. A network of national Focal Points has since been established to promote efficient coordination of expertise at the national level, leading to better exploitation of national programs around common European priorities. Representatives of the CIS, South American and Mediterranean regions have also been included in the network to foster further international cooperation.
The Graeme Clark Centre for Bionic Ear and Neurosensory Research, La Trobe University, Melbourne, Australia
ABSTRACT
The multi-channel cochlear implant is the first clinically successful interface between the world of sound and human consciousness, and the first means of giving severely deaf people hearing and speech understanding, and children spoken language. It has arisen from multi-disciplinary research in neurophysiology; communication, electronic, mechanical and bioengineering; neurobiology; anatomy and pathology; surgery; psychophysics; speech science; audiology and education. Physiological research showed that brainstem nerve cells could only respond to electrical stimuli up to 500 pulses/s, but they fired deterministically rather than stochastically, as is the case with sound. Behavioural research in the experimental animal confirmed this limitation and showed temporal coding could occur when stimulating different sites along the cochlea. Electrophysiological and mathematical studies demonstrated how to stimulate the cochlea for limited place coding. After a series of anatomical, pathological and biological studies, we established that it was safe to implant deaf people. Psychophysical findings from the first patient demonstrated how simple and complex electrical stimuli were perceived. There was a relation between timbre and site of stimulation that could be scaled, and the stimuli were perceived as vowels. This led to the first successful speech coding strategy, which extracted the second formant frequency for place coding. A second patient had similar results and demonstrated that the memory for speech sounds could be retained for many years. Later research also established that the strategy was appropriate for tonal languages. The extraction of other formant and band-pass filtered frequencies for place coding has shown a steady improvement in results. The research is also providing an understanding of neural processing in the auditory pathways, and how this underlies the conscious experience of speech. The research directions to achieve high-fidelity sound for hearing in noise and appreciating music are now more clearly defined. The speech processing strategy is very effective, especially for young deaf children diagnosed under 12 months of age. With good education they can achieve near-normal spoken language. Bilateral implants can provide sound localization and some improved hearing in noise. The implant has been developed industrially through close collaboration between the University of Melbourne's research group and the company Cochlear Limited, created to take university research to the clinic and market place. The bionic ear has also paved the way for a new discipline in Medical Physics and Biomedical Engineering, which I have called Medical Bionics.
(1) Universität der Bundeswehr Munich, Germany (2) Technische Universität Dresden, Germany
ABSTRACT
In recent years, many papers have been published on contribution analysis to identify acoustic sources. This paper discusses a new method to identify noise sources. It is based on an energy approach which, for external problems, uses the radiated sound power as the basis for the contribution analysis. The contribution analysis results in a surface distribution of real, positive values only. An advantage is that the positive contributions only sum up and do not cancel each other. Their physical meaning, however, remains unclear so far. The presentation will discuss this problem, and further discussion in the auditorium is encouraged.
Sonochemistry Centre, Coventry University, UK
ABSTRACT
Environmentally friendlier preparations of chemical compounds and organic or inorganic materials are generally accompanied by the concept of saving resources by optimizing reaction conditions and/or introducing new process technologies. The use of ionic liquids and solvent-free approaches are among these technologies, but in terms of minimising energy and optimizing reaction control, sonochemistry has proved to be a real option for industry. It is interesting to trace the development of ultrasonic process optimisation in that it is driven from both industry and academia, with the former looking for commercial advantage and the latter for innovative research. Somewhere in the middle there is always the important question of how to deliver the ultrasound properly. It is on the answer to this central question that the future of industrial sonochemistry rests, and maybe we are now coming nearer to that answer. The knowledge has always been out there somewhere, but because it does not reside in one person, one academic discipline or even one industrial manufacturer, it is not easy to determine. Fortunately, with a greater cross-disciplinary interest in scale-up, a number of larger installations are being developed. Examples will be chosen from a number of fields that illustrate the wide-ranging applicability of this technology in the chemical and processing industries, and some practical applications which use high-power ultrasound will be reviewed.
(1) Laboratory for Language Development, RIKEN Brain Science Institute, Wako-shi, Japan (2) Department of Psychology and Neuroscience, Duke University, Durham, USA
ABSTRACT
Infants learn much about the phonology of their own language during the first year of their lives. To date, however, the vast majority of research on infant speech perception has been carried out with infants learning English and other European languages, and we know very little about how infants learning other languages acquire the sound systems of those languages. The phonological characteristics of Japanese differ from those of English and other European languages in important ways, and investigation of its acquisition has the potential to shed important light on our understanding of phonological acquisition. In this paper, data from Japanese are presented to exemplify this point: the acquisition of mora-timed rhythm, edge-prominent prosody, lexical pitch accent and segmental distribution.
(1) Massachusetts Institute of Technology, 77, Massachusetts Avenue, Cambridge, USA. (2) Northeastern University, 360, Huntington Avenue, Boston, USA. (3) Southeast Fisheries Science Center, 3209 Frederic Street, Pascagoula, USA. (4) Institute of Marine Research, Post Office Box 1870, Nordnes, Bergen, Norway. (5) Northeast Fisheries Science Center, 166 Water Street, Woods Hole, USA.
ABSTRACT
Ocean Acoustic Waveguide Remote Sensing (OAWRS) has recently been shown to be capable of instantaneously imaging and continuously monitoring fish populations over large, continental-shelf-scale areas at areal rates tens of thousands to millions of times greater than those of conventional methods. Here we discuss the fundamental principles of ocean waveguide propagation and scattering, as well as the technology that makes OAWRS possible.
With the 'first look' of OAWRS on the New Jersey continental shelf in the spring of 2003, we were able to make a number of fundamental scientific discoveries about (1) the instantaneous horizontal structural characteristics, (2) the temporal evolution and (3) the propagation of information within very large fish shoals. These include the findings that: the instantaneous spatial distribution of fish follows a power-law process, so that structural similarity exists at all scales from meters to tens of km (previously, evidence for structural similarity existed only at small scales, <100 m); large shoals are far more horizontally contiguous in 2D than was previously believed on the basis of 1D line-transect methods, which sometimes inaccurately portray them as disjoint clusters; the temporal autocorrelation scale of population change within a very large shoal is on the order of minutes; temporal fluctuations in shoal population also follow a power-law process, making the shoals far more predictable; and fish density waves regularly propagate information over km scales, three orders of magnitude larger than previously observed, at speeds ten times faster than fish can swim.
General predictions about animal group behaviour believed to apply in nature irrespective of species were confirmed by monitoring the Georges Bank marine ecosystem (Fig. 1A) with OAWRS in 2006. By quantifying the formation process of vast herring shoals (Fig. 1B) during spawning, it was shown that (1) a rapid transition from disordered to highly synchronized behaviour occurs as fish population density reaches a critical value; (2) organized group migration occurs after this transition; and (3) small sets of leaders significantly influence the actions of much larger groups. The spawning process was found to follow a regular diurnal pattern in space and time which proved to be difficult to detect without continuous wide-area sensing abilities.
Physics of Fluids Group and MIRA Institute for Biomedical Technology and Technical Medicine, University of Twente, The Netherlands
ABSTRACT
Contrast-enhanced ultrasound imaging relies on the nonlinear scattering of microbubbles suspended in an ultrasound contrast agent. The bubble dynamics is described by a Rayleigh-Plesset-type equation, and the success of harmonic imaging using contrast agents has always been attributed to the nonlinear behavior predicted by this equation. A surfactant layer of phospholipids stabilizes the microbubbles, and it has always been assumed that the visco-elastic properties of the coating lead to an increased stiffness and additional damping of the radial dynamics, hence to a reduction of the nonlinear response of the bubbles. Here we show that the coating material in fact leads to an increased nonlinear bubble response, even at low acoustic pressures where the traditional models for coated as well as uncoated bubbles would predict only linear behavior. For a selection of bubbles, we show that a pronounced skewing of the resonance curve with increasing pressure is the origin of the 'threshold' behavior, where it appears as if the bubbles are activated only at elevated pressures. Another set of bubbles shows 'compression-only' behavior, where the bubbles are observed to compress efficiently while their expansion is highly reduced. Moreover, the majority of these bubbles display a very strong subharmonic response. The shell-buckling model by Marmottant et al. accounts for buckling and rupture of the shell and captures all of the above cases for a unique set of shell parameters, the relevant parameter being the phospholipid concentration at the bubble interface.
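As a baseline for the dynamics discussed above, the sketch below integrates the standard Rayleigh-Plesset equation for an uncoated bubble; it does not include the Marmottant shell terms, and the bubble radius, drive amplitude and frequency are illustrative assumptions only.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Water-like constants (assumed) and a hypothetical 2-um-radius bubble.
    RHO, P0, SIGMA, MU, KAPPA = 998.0, 101.3e3, 0.072, 1.0e-3, 1.07
    R0 = 2.0e-6                      # equilibrium radius (m)
    PA, F_DRIVE = 20e3, 2.0e6        # 20 kPa drive at 2 MHz (illustrative)

    def rayleigh_plesset(t, y):
        # Uncoated-bubble Rayleigh-Plesset equation, with y = [R, dR/dt].
        R, Rdot = y
        p_gas = (P0 + 2.0 * SIGMA / R0) * (R0 / R) ** (3.0 * KAPPA)
        p_drive = PA * np.sin(2.0 * np.pi * F_DRIVE * t)
        Rddot = ((p_gas - 2.0 * SIGMA / R - 4.0 * MU * Rdot / R - P0 - p_drive) / RHO
                 - 1.5 * Rdot ** 2) / R
        return [Rdot, Rddot]

    t_end = 5.0 / F_DRIVE            # five driving cycles
    sol = solve_ivp(rayleigh_plesset, (0.0, t_end), [R0, 0.0],
                    max_step=1.0 / (200.0 * F_DRIVE), rtol=1e-8)
    print(f"maximum radial excursion ~ {sol.y[0].max() / R0:.3f} R0")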
School of Physics, The University of New South Wales, Sydney 2052, Australia
ABSTRACT
In many wind instruments, a non-linear element (the reed or the player's lips) is loaded by a downstream duct - the bore of the instrument - and an upstream one - the player's vocal tract. Both behave nearly linearly. In a simple model due to Arthur Benade, the bore and tract are in series, and this combination is in parallel with the impedance associated with vibration of the reed or the player's lips. A recent theme for our research team has been measuring the impedance in the mouth during performance. This is an interesting challenge, because the sound level inside the mouth is tens of dB larger than the broadband signal used to measure the tract impedance. We have investigated the regimes where all three impedances have important roles in determining the playing frequency or the sound spectrum. This talk, illustrated with demonstrations, presents some highlights of that work, looking at several different instruments. First-order models of the bore of flutes, clarinets and oboes - the Physics 101 picture - are well known and used as metaphors beyond acoustics. Of course, the bores are not simple cylinders and cones, so we briefly review some of the more interesting features of more realistic models before relating performance features and instrument quality to features of the input impedance spectrum. Acousticians and sometimes musicians have debated whether the upstream duct, the vocal tract, is important. Setting aside flute-like instruments, the bore resonances near which instruments usually operate have high impedance (tens of MPa·s·m⁻³ or more), so the first-order model of the tract is a short circuit that has no effect on the series combination. In this country, that model is quickly discarded: in the didjeridu, rhythmically varying formants in the output sound, produced by changing geometries in the mouth, are a dominant musical feature. Here, the impedance peaks in the tract inhibit flow through the lips. Each produces a minimum in the radiated spectrum, so the formants we hear are the spectral bands falling between the impedance peaks. Heterodyne tones produced by simultaneous vibration of the lips and vocal folds are another interesting feature. In other wind instruments, vocal tract effects are sometimes musically important: as well as affecting tone quality, the vocal tract can sometimes dominate the series combination and select the operating frequency, a situation used in various wind instruments. In brass instruments, it may be important in determining pitch and timbre. Saxophonists need it to play the altissimo register, and clarinettists use it to achieve the glissandi and pitch bending in, for example, Rhapsody in Blue or klezmer playing.
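The series/parallel combination in Benade's simple model, as described above, can be written down directly; the sketch below uses purely illustrative impedance values (the reed reactance and tract impedances are placeholders) to show that the tract only matters once it becomes comparable with the bore impedance.

    import numpy as np

    def benade_load(z_bore, z_tract, z_reed):
        # Bore and tract impedances in series; that combination in parallel
        # with the impedance of the vibrating reed or lips.
        z_series = z_bore + z_tract
        return z_series * z_reed / (z_series + z_reed)

    z_bore = 50e6                     # Pa.s/m^3 at a bore resonance (illustrative)
    z_reed = -200e6j                  # stiffness-like reed reactance (placeholder)
    for z_tract in (1e6, 10e6, 50e6):
        z = benade_load(z_bore, z_tract, z_reed)
        print(f"tract {z_tract / 1e6:5.1f} MPa.s/m^3 -> |Z| = {abs(z) / 1e6:6.1f} MPa.s/m^3")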
National Institute for Public Health and the Environment, The Netherlands
ABSTRACT
Over the past 30 years and more, research has documented the long-term health effects of noise; for some distinct outcomes, exposure-response relations are now available, and this increasingly facilitates the calculation of the total burden of disease due to noise. Annoyance, sleep disturbance, cognitive and cardiovascular effects have been identified as the main consequences of chronic noise exposure, primarily transport-related. For all outcomes, further fine-tuning is still feasible and necessary. Noise and health research has typically been oriented toward single sources, single exposures and single health outcomes, with a main focus on noise control. Recently, more integrated and contextual approaches have come forward, in which the health effects of combined noise sources, or the combined effect of air and noise pollution, are studied. Another example of an integrated approach is the so-called soundscape approach, which is strongly contextual and pays more attention to acoustical quality, to the balance between positive and negative aspects of the acoustical environment, and to the potential restorative function of areas with wanted sound. This approach is still in its infancy, especially where effects on health and well-being are concerned. There is ongoing debate about which noise metrics are most suitable to predict health effects, especially in relation to sleep disturbance, and about the effects of low-frequency noise. Additional measures may also be necessary in order to describe acoustical quality at the micro level in the context of perception, behaviour, social cohesion and the restorative effect of areas with a high acoustic quality. This paper reviews the state of the art of "classic" studies on noise and health and discusses some new approaches and their potential to enhance further understanding of the differential health effects of noise. Finally, some research needs are put forward that can map the health effects of noise produced by new technologies.