[dipl] Perustieteiden korkeakoulu / SCI
Permanent URI for this collection: https://aaltodoc.aalto.fi/handle/123456789/21
Browsing [dipl] Perustieteiden korkeakoulu / SCI by Department "Department of Computer Science and Engineering"
Now showing 1 - 8 of 8
- Analysis of human voice production using inverse filtering, high-speed imaging, and electroglottography
Master's thesis (2005), Pulakka, Hannu. Human voice production was studied using three methods: inverse filtering, digital high-speed imaging of the vocal folds, and electroglottography. The primary goal was to evaluate an inverse filtering method by comparing inverse filtered glottal flow estimates with information obtained by the other methods. A more detailed examination of human voice source behavior was also included in the work. Material from two experiments was analyzed in this study. The data of the first experiment consisted of simultaneous recordings of the acoustic speech signal, the electroglottogram, and high-speed imaging acquired during sustained vowel phonations. Inverse filtered glottal flow estimates were compared with glottal area waveforms derived from the image material by calculating pulse shape parameters from the signals. The material of the second experiment included recordings of the acoustic speech signal and the electroglottogram during phonations of sustained vowels; this material was used for the analysis of the opening phase and the closing phase of vocal fold vibration. The evaluated inverse filtering method was found to produce mostly reasonable estimates of glottal flow. However, the parameters of the system have to be set appropriately, which requires experience in inverse filtering and speech production. The flow estimates often showed a two-stage opening phase with two instants of rapid increase in the flow derivative. The instant of glottal opening detected in the electroglottogram was often found to coincide with an increase in the flow derivative. The instant of minimum flow derivative was found to occur mostly during the last quarter of the closing phase and was shown to precede the closing peak of the differentiated electroglottogram.
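As a rough illustration of the inverse filtering idea discussed above (not the specific method evaluated in the thesis), the following sketch estimates a vocal-tract filter with autocorrelation-method LPC and inverse-filters a voiced frame to approximate the glottal flow derivative; the frame length, LPC order, and synthetic signal are assumptions for illustration.

```python
# Minimal LPC inverse-filtering sketch (illustrative only, not the thesis's method).
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_coefficients(frame, order):
    """Autocorrelation-method LPC via a symmetric Toeplitz solve."""
    windowed = frame * np.hamming(len(frame))
    r = np.correlate(windowed, windowed, mode="full")[len(windowed) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    return np.concatenate(([1.0], -a))          # A(z) = 1 - sum(a_k z^-k)

fs = 8000
t = np.arange(0, 0.04, 1 / fs)
# Synthetic "voiced" frame standing in for a recorded vowel segment.
frame = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)

a = lpc_coefficients(frame, order=10)
flow_derivative = lfilter(a, [1.0], frame)       # inverse filter: remove vocal-tract resonances
glottal_flow = np.cumsum(flow_derivative)        # integrate to approximate the flow pulse
```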
- Application of multiway methods for dimensionality reduction to music
School of Science | Master's thesis (2013), Ramaseshan, Ajay. This thesis can be placed in the broader field of Music Information Retrieval (MIR). MIR refers to a large set of strategies, software, and tools with which computers can analyse and predict interesting patterns in audio data. It is a diverse and multidisciplinary field, encompassing signal processing, machine learning, musicology, and music theory, to name a few. Methods of dimensionality reduction are widely used in data mining and machine learning: they help reduce the complexity of the classification and clustering algorithms used to process the data, and they help in studying useful statistical properties of the dataset. In this Master's thesis, a personal music collection is used and audio features are extracted from the songs using the Mel spectrogram. A music tensor is built from these features. Two approaches to unfolding the tensor into a two-way data matrix are then studied. After unfolding the tensor, dimensionality reduction techniques such as Principal Component Analysis (PCA) and classic metric Multidimensional Scaling (MDS) are applied; unfolding the tensor and performing either MDS or PCA is equivalent to performing Multiway Principal Component Analysis (MPCA). A third method, Multilevel Simultaneous Component Analysis (MLSCA), which builds a composite model for each song, is also applied. The number of components to retain is chosen by hold-out validation. The fit of each of these models was evaluated with the T2 and Q statistics and compared against the others. The aim of this thesis is to produce a dimensionality reduction that can be used for further MIR tasks, such as better clustering of the data with respect to, e.g., artists or genres.
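For readers unfamiliar with the unfold-then-reduce pipeline described above, here is a minimal sketch using librosa and scikit-learn; the file names, Mel band count, frame count, and number of retained components are illustrative assumptions, not the thesis's actual settings.

```python
# Sketch: Mel-spectrogram tensor, mode-1 unfolding, and PCA (illustrative settings).
import numpy as np
import librosa
from sklearn.decomposition import PCA

songs = ["song1.wav", "song2.wav", "song3.wav"]     # hypothetical collection
n_mels, n_frames = 40, 1000

features = []
for path in songs:
    y, sr = librosa.load(path, sr=22050)
    s = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    s = librosa.power_to_db(s)[:, :n_frames]         # (n_mels, n_frames)
    features.append(s)

# Music tensor: songs x Mel bands x time frames.
tensor = np.stack(features)                           # (n_songs, n_mels, n_frames)

# Mode-1 unfolding: one row per song, Mel-time features concatenated.
unfolded = tensor.reshape(len(songs), -1)

# PCA on the unfolded matrix corresponds to one variant of multiway PCA.
pca = PCA(n_components=2)
scores = pca.fit_transform(unfolded)
print(scores.shape)                                   # (n_songs, 2)
```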
- Compression of Marine Vessel Telemetry
School of Science | Master's thesis (2011), Mäyränen, Juuso. Transferring telemetry data from marine vessels to onshore data centers is an important aspect of the operation of Eniram Ltd, a company that provides decision-making support to crews and shipping companies for real-time and long-term vessel and fleet optimization. Data transfer has thus far consumed approximately 1-2 GB per month per vessel, which, over satellite links, is a considerable expense to the shipping companies. The purpose of this thesis is to find ways of compressing the data more efficiently and to produce a tool to aid in this. The tool transforms the data files reversibly and losslessly into a more compressible format, and publicly available lossless compression tools are used to compress the transformed output. The relevant theory behind data compression is described, along with several common compression techniques and algorithms and the operation of some available general-purpose compression tools. The experiments consisted of compressing various data sets both in their original form and after being transformed by the implemented tool, and the results are compared to the current state of the art in general-purpose compression tools. The findings show that, using fairly simple methods, the data can be compressed to less than one-fifth of the previous compressed size.
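The transform-then-compress idea can be sketched as follows; the record format, the delta transform, and the comparison against gzip and xz are illustrative assumptions, not Eniram's actual data format or tool.

```python
# Sketch: reversible pre-transform (delta-encode timestamps) before general-purpose compression.
import gzip
import lzma
import random

# Fabricate telemetry-like rows: timestamp, speed, heading.
random.seed(0)
rows, t, speed, heading = [], 1_600_000_000, 14.0, 90.0
for _ in range(10_000):
    t += 1
    speed += random.uniform(-0.05, 0.05)
    heading += random.uniform(-0.2, 0.2)
    rows.append((t, speed, heading))

raw = "\n".join(f"{ts},{s:.3f},{h:.3f}" for ts, s, h in rows).encode()

# Reversible transform: store deltas of the slowly changing timestamp column.
prev = 0
transformed_lines = []
for ts, s, h in rows:
    transformed_lines.append(f"{ts - prev},{s:.3f},{h:.3f}")
    prev = ts
transformed = "\n".join(transformed_lines).encode()

for label, data in (("raw", raw), ("delta-transformed", transformed)):
    print(label, "gzip:", len(gzip.compress(data)), "xz:", len(lzma.compress(data)))
```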
- Designing practices for making use of tacit knowledge in a knowledge work environment
Master's thesis (2007), Ylihärsilä, Kari. This thesis deals with making use of tacit knowledge in a knowledge work environment. The purpose of the research is to design practices, tools, and ways of working for making use of tacit knowledge in a knowledge-intensive corporation. Making use of tacit knowledge was researched in the areas of new employee introduction, using electronic workspaces, and working with virtual teams. The practices were designed using a constructive research approach, and 30 persons were interviewed within the study. The research questions are: 1) What is tacit knowledge? 2) What are best practices for making use of tacit knowledge in a knowledge work environment? 3) What are the key issues to be taken into account when introducing these practices in a knowledge work corporation? In the literature study, tacit knowledge is defined and ways of sharing tacit knowledge are identified; the introduction of practices for making use of tacit knowledge inside organizations is also examined. In the empirical part, best practices were gathered from the organization and refined through theoretical understanding. Different maturity levels were reached in each area of research: some results are presented as tools to use and some as conceptual understandings. Concrete tools for transferring tacit knowledge in new employee introduction were designed; these were tested informally and have reached a level of maturity ready for pilot testing. These tools deal mainly with making use of implicit knowledge, the part of tacit knowledge that can be put into words. In the areas of electronic workspaces and virtual teams, the results are presented as conceptual understandings. In addition to understanding and methods, making use of tacit knowledge was found to be mostly about enabling the employees' intrinsic motivation for sharing their knowledge, for example through stories, comments, and concept creation. A model consisting of cultural, intentional, systemic, and behavioural perspectives on motivation for knowledge sharing was empirically verified.
- Intelligent two-factor authentication – Deciding authentication requirements using historical context data
School of Science | Master's thesis (2014), Jarva, Olli. This thesis is a case study of designing and implementing a more secure authentication service for Futurice Ltd., a medium-sized software consulting company. The majority of its employees are located in various client offices and need to access internally developed web services over the Internet. In this environment, single-password authentication does not provide a sufficient level of security. The standard solution is to require a second authentication factor, such as a one-time code, in addition to the password. At Futurice, the old authentication service requested authentication once every 12 hours. These unnecessary interruptions annoy users and reduce their satisfaction with the tools; furthermore, when performing complicated tasks, recovering from small interruptions may take several minutes. Deploying two-factor authentication had not been possible because of the increased negative effects of the interruptions. Therefore, the goal of this thesis is to minimize the number of authentication prompts the user encounters while increasing security by enabling two-factor authentication. The new two-factor authentication service presented in this thesis uses historical activity data to decide when the user should be reauthenticated. With the new system, reauthentication was requested approximately once per two weeks on average, resulting in a 90% reduction in the number of authentication prompts without compromising security. Two-factor authentication is not required when there is other evidence from the context data that the user is authentic. A brief inspection of EU and Finnish law indicated that the data collection and processing required for context-based authentication is acceptable. The results show that the time and effort spent on authentication processes can be reduced with relatively little effort. Similar results should be achievable in other companies and organizations; thresholds for the various algorithms may require tuning, and future work is needed to automate this.
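A hypothetical sketch of a context-based reauthentication decision of the kind described above; the signals (known network, browser fingerprint, age of the last strong authentication) and the thresholds are illustrative assumptions, not Futurice's actual policy or implementation.

```python
# Hypothetical context-based second-factor decision (illustrative signals and thresholds).
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class LoginContext:
    user: str
    ip_network: str              # e.g. "192.0.2.0/24", resolved beforehand
    browser_fingerprint: str
    last_strong_auth: datetime

def requires_second_factor(ctx: LoginContext,
                           known_networks: set[str],
                           known_fingerprints: set[str],
                           max_age: timedelta = timedelta(days=14)) -> bool:
    """Ask for the second factor only when the context looks unfamiliar or stale."""
    familiar = (ctx.ip_network in known_networks and
                ctx.browser_fingerprint in known_fingerprints)
    fresh = datetime.utcnow() - ctx.last_strong_auth < max_age
    return not (familiar and fresh)

ctx = LoginContext("alice", "192.0.2.0/24", "fp-abc123",
                   datetime.utcnow() - timedelta(days=3))
print(requires_second_factor(ctx, {"192.0.2.0/24"}, {"fp-abc123"}))   # False: no prompt needed
```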
- Measurement and Analysis of Networking Performance in Virtualised Environments
School of Science | Master's thesis (2014), Chauhan, Maneesh. Mobile cloud computing, having embraced ideas like computation offloading, demands a low-latency, high-speed network to satisfy the quality-of-service and usability assurances of mobile applications. The networking performance of clouds based on Xen and VMware virtualisation solutions has been extensively studied, although the focus has mostly been on network throughput and bandwidth metrics. This work focuses on the measurement and analysis of the networking performance of VMs in a small, KVM-based data centre, emphasising the role of virtualisation overheads in the host-VM latency and, eventually, in the overall latency experienced by remote clients. We also present some useful tools, such as Drift analyser, Virto Calc, and Trotter, that we developed for carrying out specific measurements and analyses. Our work shows that an increase in a VM's CPU workload has direct implications for network round-trip times. We also show that virtualisation overheads (VO) have a significant bearing on end-to-end latency and can contribute up to 70% of the round-trip time between the host and the VM. Furthermore, we thoroughly study latency due to virtualisation overheads as a networking performance metric and analyse the impact of CPU loads and networking workloads on it. We also analyse resource sharing patterns and their effects amongst VMs of different sizes on the same host. Finally, having observed a dependency between the network performance of a VM and the host CPU load, we suggest that in a KVM-based cloud installation, workload profiling and an optimal processor pinning mechanism can be effectively utilised to regulate the network performance of the VMs. The findings of this work are applicable to optimising latency-oriented VM provisioning in cloud data centres, which would benefit most latency-sensitive mobile cloud applications.
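As a minimal illustration of host-to-VM round-trip time measurement (the thesis relied on purpose-built tools such as Drift analyser, Virto Calc, and Trotter), the sketch below times TCP connection establishment to a VM; the target address and port are assumptions.

```python
# Simple RTT probe sketch: time TCP handshakes to a (hypothetical) VM address.
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int, samples: int = 20) -> list:
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=1.0):
            pass                                  # handshake only, then close
        rtts.append((time.perf_counter() - start) * 1000.0)
        time.sleep(0.05)
    return rtts

if __name__ == "__main__":
    rtts = tcp_rtt_ms("192.168.122.10", 22)       # hypothetical VM address and port
    print(f"median {statistics.median(rtts):.3f} ms, "
          f"p95 {sorted(rtts)[int(0.95 * len(rtts))]:.3f} ms")
```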
- Mitigating DDoS attacks with cluster-based filtering
School of Science | Master's thesis (2011), Šćepanović, Sanja. Distributed Denial of Service (DDoS) attacks are considered one of the major security threats in the current Internet. Although many solutions have been suggested for DDoS defense, real progress in fighting these attacks is still missing. In this work, we analyze and experiment with cluster-based filtering for DDoS defense. In cluster-based filtering, unsupervised learning is used to create a normal profile of the network traffic, and the filter for DDoS attacks is based on this normal profile. We focus on the scenario in which the cluster-based filter is deployed at the target network and serves for proactive or reactive defense. A game-theoretic model is created for the scenario, making it possible to model the defender and attacker strategies as mathematical optimization tasks. The obtained optimal strategies are then experimentally evaluated. In the testbed setup, the hierarchical heavy hitters (HHH) algorithm is applied to traffic clustering, and the Differentiated Services (DiffServ) quality-of-service (QoS) architecture is used for deploying the cluster-based filter on a Linux router. The theoretical results suggest that cluster-based filtering is an effective method for DDoS defense unless the attacker is able to send traffic that perfectly imitates the normal traffic distribution. The experimental outcome confirms the theoretical results and shows the high effectiveness of cluster-based filtering in proactive and reactive DDoS defense.
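A heavily simplified stand-in for the cluster-based filter described above: the real deployment used the HHH algorithm and a DiffServ-based filter on a Linux router, whereas this sketch merely builds a "normal profile" of per-/16 source-prefix traffic shares and drops traffic from prefixes whose share deviates far from it; prefixes, thresholds, and traffic samples are illustrative.

```python
# Simplified normal-profile filter (illustrative stand-in for HHH + DiffServ filtering).
from collections import Counter

def prefix(ip: str, bits: int = 16) -> str:
    parts = ip.split(".")
    return ".".join(parts[:bits // 8]) + ".0.0/16"

def build_profile(training_ips):
    """Normal profile: share of traffic per source /16 prefix."""
    counts = Counter(prefix(ip) for ip in training_ips)
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

def filter_packets(packet_ips, profile, tolerance=3.0):
    """Keep packets from prefixes whose observed share stays within `tolerance`
    times the normal profile (unknown prefixes get a tiny prior share)."""
    observed = Counter(prefix(ip) for ip in packet_ips)
    total = sum(observed.values())
    keep = []
    for ip in packet_ips:
        expected = profile.get(prefix(ip), 1e-4)
        if observed[prefix(ip)] / total <= tolerance * expected:
            keep.append(ip)
    return keep

profile = build_profile(["10.0.%d.%d" % (i % 4, i % 200) for i in range(1000)])
attack = ["203.0.113.%d" % (i % 250) for i in range(500)]
print(len(filter_packets(["10.0.1.5"] * 50 + attack, profile)))   # attack prefix is dropped
```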
- Using Delaunay triangulation in infrastructure design software
Faculty of Information and Natural Sciences | Master's thesis (2009), Herva, Ville. In Finland, irregular triangulation has traditionally been used in infrastructure design software, such as road, railroad, bridge, tunnel, and environmental design software, to model ground surfaces. Elsewhere, methods such as regular square and triangle networks, approximation without an explicit surface representation, and algebraic surfaces have been used for the same task. Approximating the ground surface is necessary for tasks such as determining the height of a point on the ground, interpolating 2D polylines onto the ground, calculating height lines, calculating volumes, and visualization. In most of these cases a continuous surface representation, a digital terrain model, is needed. Delaunay triangulation is a way of forming an irregular triangulation from a 2D point set such that the triangles are well formed, which is essential for the accuracy of the surface representation. This Master's thesis studies how much time and memory it takes to form a Delaunay triangulation for large point sets, and how Delaunay triangulation compares to other methods of forming a surface representation. In addition, the run time and accuracy of the resulting surface representations are studied in different interpolation and volume calculation tasks.
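A brief sketch of the core operation this thesis builds on: forming a Delaunay triangulation of surveyed ground points and interpolating the terrain height at a query location, here with SciPy and synthetic points standing in for real survey data.

```python
# Sketch: Delaunay-based digital terrain model and height interpolation (synthetic data).
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(500, 2))             # surveyed ground points (x, y)
z = np.sin(xy[:, 0] / 10) * 5 + xy[:, 1] * 0.1      # synthetic elevations

tri = Delaunay(xy)                                   # irregular triangulation of the point set
interp = LinearNDInterpolator(tri, z)                # linear interpolation within each triangle

print("triangles:", len(tri.simplices))
print("height at (50, 50):", float(interp(50.0, 50.0)))
```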