Browsing by Author "Belloch, Jose A."
Now showing 1 - 3 of 3
Item Efficient Velvet-Noise Convolution in Multicore Processors (Audio Engineering Society, 2024-06)
Belloch, Jose A.; Badia, Jose M.; Leon, German; Välimäki, Vesa
Department of Information and Communications Engineering; Audio Signal Processing; Universidad Carlos III de Madrid; Jaume I University
Velvet noise, a sparse pseudo-random signal, finds valuable applications in audio engineering, such as artificial reverberation, decorrelation filtering, and sound synthesis. These applications rely on convolution operations whose computational requirements depend on the length, sparsity, and bit resolution of the velvet-noise sequence used as filter coefficients. Given the inherent sparsity of velvet noise and its occasional restriction to a few distinct values, significant computational savings can be achieved by designing convolution algorithms that exploit these properties. This paper shows that an algorithm called the transposed double-vector filter is the most efficient way of convolving velvet noise with an audio signal. This method optimizes access patterns to take advantage of the processor's fast caches. The sequential sparse algorithm is shown to be always faster than the dense one, and the speedup is linearly dependent on the sparsity. The paper also explores the potential for further speedup on multicore platforms through parallelism and evaluates the impact of data encoding, including 16-bit and 32-bit integers and 32-bit floating-point representations. The results show that, using the fastest implementation of a long velvet-noise filter, it is possible to process more than 40 channels of audio in real time on the quad-core processor of a modern system-on-chip.
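The saving described in the abstract above comes from the structure of velvet noise: only K of the L filter taps are nonzero, and each nonzero tap is +1 or -1, so the convolution reduces to K sign-dependent additions per output sample instead of L multiply-adds. The following is a minimal sketch of that idea (not the transposed double-vector algorithm from the paper), assuming the nonzero taps are stored as position/sign pairs:

    #include <stddef.h>

    /* One nonzero tap of a velvet-noise filter: its position and its sign (+1 or -1). */
    typedef struct { size_t pos; int sign; } vn_tap;

    /* Dense FIR convolution for reference: L multiply-adds per output sample. */
    void conv_dense(const float *x, size_t n, const float *h, size_t L, float *y)
    {
        for (size_t i = 0; i < n; i++) {
            float acc = 0.0f;
            for (size_t k = 0; k < L && k <= i; k++)
                acc += h[k] * x[i - k];
            y[i] = acc;
        }
    }

    /* Sparse velvet-noise convolution: only K additions/subtractions per output sample. */
    void conv_velvet(const float *x, size_t n, const vn_tap *taps, size_t K, float *y)
    {
        for (size_t i = 0; i < n; i++) {
            float acc = 0.0f;
            for (size_t k = 0; k < K; k++) {
                size_t p = taps[k].pos;
                if (p <= i)
                    acc += (taps[k].sign > 0) ? x[i - p] : -x[i - p];
            }
            y[i] = acc;
        }
    }

The advantage of the sparse loop over the dense one grows with the sparsity ratio L/K, which is consistent with the linear dependence on sparsity reported above; the paper's transposed double-vector variant additionally reorders memory accesses to exploit the processor caches.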
Item GPU-Based Dynamic Wave Field Synthesis Using Fractional Delay Filters and Room Compensation (2017-02-01)
Belloch, Jose A.; Gonzalez, Alberto; Quintana-Ortí, Enrique S.; Ferrer, Miguel; Välimäki, Vesa
Jaume I University; Polytechnic University of Valencia; Department of Signal Processing and Acoustics
Wave field synthesis (WFS) is a multichannel audio reproduction method of considerable computational cost that renders an accurate spatial sound field by driving a large number of loudspeakers to emulate virtual sound sources. The rendering of moving sound sources can be improved by using fractional delay filters, and room reflections can be compensated with an inverse filter bank that corrects the room effects at selected points within the listening area. However, both the fractional delay filters and the room compensation filters further increase the computational requirements of the WFS system. This paper analyzes the performance of a WFS system composed of 96 loudspeakers that integrates both strategies. To deal with the large computational complexity, we explore the use of a graphics processing unit (GPU) as a massive signal co-processor to increase the capabilities of the WFS system. The performance of the method, as well as the benefit of the GPU acceleration, is demonstrated by considering different sizes of room compensation filters and fractional delay filters of order 9. The results show that a 96-loudspeaker WFS system efficiently implemented on a state-of-the-art GPU can synthesize the movements of 94 sound sources in real time and, at the same time, can manage 9216 room compensation filters with more than 4000 coefficients each.

Item Multicore implementation of a multichannel parallel graphic equalizer (Springer, 2022-09)
Belloch, Jose A.; Badía, José M.; León, German; Bank, Balázs; Välimäki, Vesa
Universidad Carlos III de Madrid; Jaume I University; Budapest University of Technology and Economics; Department of Signal Processing and Acoustics
Numerous signal processing applications are emerging on mobile computing systems. These applications are subject to responsiveness constraints for user interactivity and, at the same time, must be optimized for energy efficiency. Many current embedded devices contain low-power multicore processors that offer a good trade-off between computational capacity and power consumption. In this context, equalizers are widely used in mobile applications such as music streaming to adjust the levels of bass and treble in sound reproduction. In this study, we evaluate a graphic equalizer from the audio, computational-capacity, and energy-efficiency perspectives, as well as the execution of multiple real-time equalizers running on the embedded quad-core processor of a mobile device. To this end, we experiment with the working frequencies and the parallelism that can be extracted from a quad-core ARM Cortex-A57. Results show that, using high CPU frequencies and three or four cores, our parallel algorithm can equalize more than five channels per watt in real time with an audio buffer of 4096 samples, which implies a latency of 92.8 ms at the standard sample rate of 44.1 kHz.
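The 92.8 ms latency quoted above is simply the duration of one 4096-sample buffer at 44.1 kHz (4096 / 44100 ≈ 0.0929 s). Below is a minimal sketch of the per-channel parallelism idea on a quad-core processor, using OpenMP and a placeholder equalization routine; the names and the routine are illustrative assumptions, not the paper's implementation:

    #include <stdio.h>

    #define NUM_CHANNELS 6
    #define BUFFER_SIZE  4096
    #define SAMPLE_RATE  44100.0

    /* Placeholder for a real graphic-equalizer filter bank: here just a flat gain,
       standing in for the equalization of one channel's audio buffer. */
    static void equalize_block(float *buf, int len)
    {
        for (int i = 0; i < len; i++)
            buf[i] *= 1.0f;
    }

    int main(void)
    {
        static float channels[NUM_CHANNELS][BUFFER_SIZE];

        /* Each channel is independent, so one frame's channels can be
           equalized in parallel across the four cores. */
        #pragma omp parallel for num_threads(4)
        for (int c = 0; c < NUM_CHANNELS; c++)
            equalize_block(channels[c], BUFFER_SIZE);

        /* Buffer latency: the duration of one frame at the given sample rate. */
        printf("latency = %.1f ms\n", 1000.0 * BUFFER_SIZE / SAMPLE_RATE); /* ~92.9 ms */
        return 0;
    }

Compile with -fopenmp to enable the parallel loop. Because the channels are independent, spreading them over more cores increases the number of channels that can be processed within one buffer period, which is the real-time constraint discussed in the abstract.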