[diss] Perustieteiden korkeakoulu / SCI
Browsing [diss] Perustieteiden korkeakoulu / SCI by Title
Now showing 1 - 20 of 1574
Item Access Control and Machine Learning: Evasion and Defenses (Aalto University, 2019) Juuti, Mika; Tietotekniikan laitos; Department of Computer Science; Secure Systems Group; Perustieteiden korkeakoulu; School of Science; Asokan, N., Prof., Aalto University, Department of Computer Science, Finland
Machine learning (ML) and artificial intelligence (AI) systems have proliferated significantly in recent years, for example in the new market of "machine learning as a service". ML is also increasingly deployed in security-critical applications, such as access control systems. ML can be used to make security systems easier to use, or to defend against specific attacks, such as the "relay attack". Such ML applications are particularly sensitive to the recent development of "adversarial machine learning", in which weaknesses in machine learning systems are exploited to undermine some security-critical property. For example, "evasion attacks" undermine an ML system's prediction integrity, while "model extraction attacks" undermine the system's confidentiality. It has therefore become increasingly important to evaluate ML applications against such undesired behavior. The work described in this dissertation is divided into three parts. In the first part, I evaluate how security properties of so-called transparent authentication systems can be improved using machine learning, and describe how to evaluate security against strong adversaries. In the second part, I present state-of-the-art evasion and model extraction attacks against image classification systems. In the third part, I evaluate state-of-the-art hate speech classifiers against evasion attacks, and present a method of artificially creating credible fake restaurant reviews.
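As a minimal illustration of the kind of evasion attack discussed above, consider a toy linear classifier perturbed in the FGSM style. The model, weights, and inputs here are invented for illustration; they are not the classifiers or attacks studied in the dissertation.

```python
import numpy as np

# Hypothetical linear classifier: p(class 1) = sigmoid(w . x + b).
# All numbers are made up for illustration.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    return int(1.0 / (1.0 + np.exp(-(w @ x + b))) >= 0.5)

x = np.array([1.0, 0.2, 0.3])      # benign input, classified as class 1

# Evasion in the FGSM style: step each feature against the sign of its
# weight, the direction that most decreases the class-1 score.
eps = 1.0
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # 1 0 -> the perturbed input evades
```

A small, targeted perturbation flips the prediction even though the input changes only slightly, which is exactly the prediction-integrity failure that evasion attacks exploit.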
Finally, I present general observations and conclusions about both transparent authentication and the feasibility of using ML for purposes such as moderation.

Item Accessing multiversion data in database transactions (Aalto-yliopiston teknillinen korkeakoulu, 2010) Haapasalo, Tuukka; Sippu, Seppo, Prof.; Tietotekniikan laitos; Department of Computer Science and Engineering; Aalto-yliopiston teknillinen korkeakoulu; Soisalon-Soininen, Eljas, Prof.
Many important database applications need to access previous versions of the data set, thus requiring that the data are stored in a multiversion database and indexed with a multiversion index, such as the multiversion B+-tree (MVBT) of Becker et al. The MVBT is optimal: any version of the database can be accessed as efficiently as with a single-version B+-tree indexing only the data items of that version. However, it cannot be used in a full-fledged database system, because it follows a single-update model in which the update cannot be rolled back. We have redesigned the MVBT index so that a single multi-action updating transaction can operate on the index structure concurrently with multiple read-only transactions. Data items created by the transaction become part of the same version, and the transaction can roll back. We call this structure the transactional MVBT (TMVBT). The TMVBT index remains optimal even in the presence of logical key deletions. Even though deletions in a multiversion index must not physically delete the history of the data items, queries and range scans can become more efficient if the leaf pages of the index structure are merged to retain optimality. For the general transactional setting with multiple updating transactions, we propose a multiversion database structure called the concurrent MVBT (CMVBT), which stores the updates of active transactions in a separate main-memory-resident versioned B+-tree index.
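The multiversion access pattern underlying these index structures can be shown with a toy in-memory sketch: every update creates a new version, any version remains queryable, and deletions are logical tombstones. This is an illustration of the access pattern only; a real MVBT is a disk-based B+-tree with far more machinery.

```python
class MultiversionStore:
    """Toy multiversion key-value store (illustration only)."""
    def __init__(self):
        self.hist = {}        # key -> list of (version, value), append-only
        self.version = 0

    def put(self, key, value):
        self.version += 1
        self.hist.setdefault(key, []).append((self.version, value))
        return self.version

    def delete(self, key):
        # Logical deletion: history is kept, later versions see a tombstone.
        return self.put(key, None)

    def get(self, key, version=None):
        """Return the value of key as of the given version."""
        if version is None:
            version = self.version
        value = None
        for v, val in self.hist.get(key, []):   # versions are in order
            if v > version:
                break
            value = val
        return value

s = MultiversionStore()
v1 = s.put("x", 1)
v2 = s.put("x", 2)
v3 = s.delete("x")
print(s.get("x", v1), s.get("x", v2), s.get("x", v3))  # 1 2 None
```

Every historical version stays readable after later updates and even after the logical delete, which is the property the MVBT family provides with B+-tree efficiency.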
A system maintenance transaction is periodically run to apply the updates of committed transactions to the TMVBT index. We show how multiple updating transactions can operate on the CMVBT index concurrently, and our recovery algorithm is based on the standard ARIES recovery algorithm. We prove that the TMVBT index is asymptotically optimal, and show that the performance of the CMVBT index in general transaction processing is on par with that of the time-split B+-tree (TSB-tree) of Lomet and Salzberg. The TSB-tree does not merge leaf pages and is therefore not optimal if logical data-item deletions are allowed. Our experiments show that the CMVBT outperforms the TSB-tree on range queries in the presence of deletions.

Item Accurate modelling of tissue properties in diffuse optical imaging of the human brain (Teknillinen korkeakoulu, 2009) Heiskala, Juha; Lääketieteellisen tekniikan ja laskennallisen tieteen laitos
Diffuse optical imaging (DOI) is an emerging modality for non-invasive functional medical imaging using near-infrared (NIR) or visible red light. The innovation is to derive functional information about living tissue from measurements of light that has passed through it. Optical imaging can be applied to tissues as diverse as the central nervous system, the female breast, muscle, and the joints of fingers. This thesis addresses the application of DOI to studying the human brain. In this thesis, the problems of modelling light propagation in the adult and infant human head, and of reconstructing three-dimensional images of functional changes in the brain from optical measurements, are addressed. Difference imaging, where changes from baseline optical parameters rather than absolute parameter values are reconstructed, is considered.
The goal was to develop methods for accurate modelling of light propagation and to clarify how specific aspects of the computational modelling affect the reconstruction of functional images from optical measurements of the human brain. Specifically, the significance of anisotropic light propagation in the white matter, and of a priori knowledge of the anatomy and the optical properties of the head and brain, are studied. Moreover, a generic probabilistic atlas model of the infant head is developed to enhance image reconstruction. The significance of anisotropic light propagation was found to be small in optical imaging of the adult brain. Although anisotropic light propagation may have a larger impact on the measured signal when infants are imaged, the results suggest that image reconstruction can be performed without taking anisotropy into consideration. The use of a priori anatomical knowledge was found to significantly improve the accuracy and robustness of image reconstruction in difference imaging. The results suggest that for optimal reconstructions, individual MR-imaging-based anatomical data should be used when possible. For cases where individual anatomical data are not available, atlas models should be developed. An important consideration is how to obtain the baseline optical parameters of the tissue classes in the anatomical model. Literature-derived parameters can be used as a starting point; for optimal results, however, methods should be developed for estimating the baseline parameters from measured data.

Item Acoustic and optical investigations of superfluid 3He (Helsinki University of Technology, 1993) Manninen, Antti; O.V. Lounasmaa -laboratorio; O.V. Lounasmaa Laboratory; Perustieteiden korkeakoulu; School of Science; Pekola, Jukka

Item Acoustic Scattering for Spatial Audio Applications (Aalto University, 2022) Gonzalez, Raimundo; Politis, Archontis, Dr., Tampere University of Technology, Finland; Tietotekniikan laitos; Department of Computer Science; Perustieteiden korkeakoulu; School of Science; Lokki, Tapio, Prof., Aalto University, Department of Signal Processing and Acoustics, Finland
Modeling of sound propagation in the context of acoustic design and interactive applications has mainly focused on room acoustics and on source and receiver modeling. To enrich the description and perceptual immersion of virtual sound-fields, modeling frameworks can also include the effects of scattering from bodies within the physical space. One of the main challenges in modeling the effects of scattering is that its behaviour depends not only on the geometry of the scatterer but also on the direction of arrival of the incident field. This thesis is a collection of five publications: the first two studies focus on the effects of near-field sources, and the last three involve the effects of scattering within spatial audio applications. The first publication explores the effects of near-field sources on higher-order Ambisonics recording, processing, and binaural reproduction. Results indicate that while near-field sources introduce low-frequency proximity gains in higher-order microphone arrays, the regularization stages in Ambisonics recording prevent excessive gains. The second publication explores the directivity of near-field speech of 24 subjects and evaluates various repeatable speech reproduction alternatives. The third publication presents a scheme for encoding the acoustic scattering of arbitrary geometries into the spherical harmonic domain.
After encoding, the scattering is represented as a multiple-input multiple-output matrix that describes the relation between the incoming and outgoing scattering modes of a geometry. This representation admits the standard transformations in the spherical harmonic domain (rotation, translation, scaling) and is compatible with existing spatial audio frameworks such as Ambisonics and image-source methods. The method is validated against boundary element method simulations and shows minimal synthesis error. The fourth publication presents a method to encode the space-domain signals from a microphone array with arbitrary geometry and irregularly distributed sensors into Ambisonics. The algorithm relies on the array response and its enclosure's scattering properties to solve for the directions of the active sources as well as the diffuse properties of the sound-field. Objective and subjective evaluations indicate that the proposed method outperforms traditional linear encoding. The fifth publication extends the method of the third publication with sector-based encoding of acoustic scattering, suited to geometries and surfaces that do not require full spherical radiation. This last publication also presents a method to compress the scattering matrix data, allowing more efficient memory storage.
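The Ambisonics framework these publications build on represents sound-fields in the spherical harmonic domain. As textbook background (not the scattering encoder of the thesis), standard first-order plane-wave encoding in ACN channel order with SN3D normalization looks like this:

```python
import numpy as np

# First-order Ambisonic (FOA) plane-wave encoding gains in ACN channel
# order (W, Y, Z, X) with SN3D normalization. Textbook background only,
# not the thesis's scattering encoder.
def foa_encode(azimuth, elevation):
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    ce, se = np.cos(elevation), np.sin(elevation)
    return np.array([1.0,       # W: omnidirectional
                     ce * sa,   # Y: left-right dipole
                     se,        # Z: up-down dipole
                     ce * ca])  # X: front-back dipole

g_front = foa_encode(0.0, 0.0)   # source straight ahead
print(np.round(g_front, 3))      # [1. 0. 0. 1.] -> only W and X excited
```

Higher orders extend the same idea with higher-degree spherical harmonics, and the scattering matrices described above operate on exactly such mode coefficients.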
Methods proposed in the third and fifth publications can be used to introduce scattering geometries into interactive sound environments to produce more descriptive sound-fields, while the fourth publication can be used to develop Ambisonic recording arrays on practical devices such as wearables and head-mounted displays.

Item Adaptive combinations of classifiers with application to on-line handwritten character recognition (Helsinki University of Technology, 2007-03-29) Aksela, Matti; Department of Computer Science and Engineering; Tietotekniikan osasto; Laboratory of Computer and Information Science; Informaatiotekniikan laboratorio
Classifier combining is an effective way of improving classification performance. User adaptation is another valid approach for improving performance in a user-dependent system, and although adaptation is usually performed at the classifier level, adaptive committees can also be very effective. Adaptive committees have the distinct ability to perform adaptation without detailed knowledge of the member classifiers. Adaptation can therefore be used even with classification systems that are intrinsically unsuited for adaptation, whether due to lack of access to the workings of the classifier or simply a classification scheme not suitable for continuous learning. This thesis proposes methods for the adaptive combination of classifiers in the setting of on-line handwritten character recognition. The focal part of the work introduces adaptive classifier combination schemes, of which the two most prominent are the Dynamically Expanding Context (DEC) committee and the Class-Confidence Critic Combining (CCCC) committee. Both have been shown to be capable of successful adaptation to the user in the task of on-line handwritten character recognition. In particular, the highly modular CCCC framework has also shown impressive performance in a doubly-adaptive setting where adaptive classifiers are combined by an adaptive committee.
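The general idea of an adaptive committee, reweighting member classifiers from user feedback without looking inside them, can be sketched as follows. This is a loose toy illustration of the principle only, not the DEC or CCCC algorithms of the dissertation.

```python
class AdaptiveCommittee:
    """Weighted-vote committee whose member weights adapt to feedback.
    Loose illustration of the general idea, not DEC or CCCC."""
    def __init__(self, members, lr=0.1):
        self.members = members            # callables: sample -> label
        self.weights = [1.0] * len(members)
        self.lr = lr

    def classify(self, sample):
        votes = {}
        for w, m in zip(self.weights, self.members):
            label = m(sample)
            votes[label] = votes.get(label, 0.0) + w
        return max(votes, key=votes.get)

    def adapt(self, sample, true_label):
        # Reward members that were right, penalise those that were wrong;
        # the committee needs no access to the members' internals.
        for i, m in enumerate(self.members):
            if m(sample) == true_label:
                self.weights[i] += self.lr
            else:
                self.weights[i] = max(0.0, self.weights[i] - self.lr)

# Toy usage: one member that is always right, one that always answers "?".
always_right = lambda sample: sample
always_wrong = lambda sample: "?"
committee = AdaptiveCommittee([always_right, always_wrong])
for ch in "abcab":
    committee.adapt(ch, ch)
print(committee.classify("c"))   # c -> the reliable member now dominates
```

The committee adapts purely from the correctness of member outputs, which is why this style of adaptation works even with black-box member classifiers.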
In support of this main topic, the thesis also discusses a methodology for deducing correct character labels from user actions. Proper labeling is paramount for effective adaptation, and deducing the labels from the user's actions is necessary for the adaptation to remain transparent to the user. In that way, the user does not need to give explicit feedback on the correctness of the recognition results. An overview is also presented of adaptive classification methods for single-classifier adaptation in handwritten character recognition developed at the Laboratory of Computer and Information Science of the Helsinki University of Technology (CIS-HCR). Classifiers based on the CIS-HCR system have been used in the adaptive committee experiments both as member classifiers and as a reference level. Finally, two distinct approaches for further improving the performance of committee classifiers are discussed. Firstly, methods for committee rejection are presented and evaluated. Secondly, measures of classifier diversity for classifier selection, based on the concept of diversity of errors, are presented and evaluated. The topic of this thesis hence covers three important aspects of pattern recognition: on-line adaptation, combining classifiers, and a practical evaluation setting of handwritten character recognition. A novel approach combining these three core ideas has been developed and is presented in the introductory text and the included publications.
To reiterate, the main contributions of this thesis are: 1) the introduction of novel adaptive committee classification methods, 2) the introduction of novel methods for measuring classifier diversity, 3) the presentation of some methods for implementing committee rejection, 4) the discussion and introduction of a method for effective label deduction from on-line user actions, and, as a side product, 5) an overview of the CIS-HCR adaptive on-line handwritten character recognition system.

Item Adaptive methods for on-line recognition of isolated handwritten characters (Helsinki University of Technology, 2002-12-14) Vuori, Vuokko; Department of Computer Science and Engineering; Tietotekniikan osasto; Laboratory of Computer and Information Science; Informaatiotekniikan laboratorio
The main goal of the work presented in this thesis has been the development of an on-line handwriting recognition system that can recognize handwritten characters of several different writing styles and can improve its performance by adapting itself to new writing styles. The recognition method should be applicable to hand-held devices with limited memory and computational resources. The adaptation process should take place during normal use of the device, not in a specific training mode. For usability, the recognition and adaptation processes should be easily understandable to the users. The first part of this thesis gives an introduction to handwriting recognition.
The topics considered include: the variations present in personal handwriting styles; automatic grouping of similar handwriting styles; the differences between writer-independent and writer-dependent as well as on-line and off-line handwriting recognition problems; the different approaches to on-line handwriting recognition; previous adaptive recognition systems and the experiments performed with them; the recognition performance requirements and other usability issues related to on-line handwriting recognition; the current trends in on-line handwriting recognition research; the recognition results obtained with the most recent recognition systems; and commercial applications. The second part of the thesis describes an adaptive on-line character recognition system and the experiments performed with it. The recognition system is based on prototype matching. The comparisons between character samples and prototypes are based on the Dynamic Time Warping (DTW) algorithm, and the input characters are classified according to the k-nearest-neighbors (k-NN) rule. The initial prototype set is formed by clustering character samples collected from a large number of subjects; the recognition system can therefore handle various writing styles. This thesis introduces four DTW-based clustering algorithms that can be used for prototype selection. The recognition system adapts to new writing styles by modifying its prototype set. This work introduces several adaptation strategies that add new writer-dependent prototypes to the initial writer-independent prototype set, reshape the existing prototypes with a Learning Vector Quantization (LVQ)-based algorithm, and inactivate poorly performing prototypes. The adaptations are carried out on-line in a supervised or self-supervised fashion. In the former case, the user explicitly labels the input characters, which are then used as training samples in the adaptation process.
In the latter case, the system deduces the labels from the recognition results and the user's actions; this approach is prone to erroneously labeled learning samples. The different adaptation strategies were compared with each other in off-line simulations and genuine on-line user experiments. In the simulations, special attention was paid to the various erroneous learning situations likely to be encountered in real-world handwriting recognition tasks. The recognition system is able to improve its recognition accuracy significantly on the basis of only a few additional character samples per class. Recognition accuracies acceptable in real-world applications can be attained for most of the test subjects. This work also introduces a Self-Organizing Map (SOM)-based method for analyzing personal writing styles. Personal writing styles are represented by high-dimensional vectors whose components indicate the subjects' tendencies to use certain prototypical writing styles for isolated characters. These writing style vectors are then visualized with a SOM, which enables the detection and analysis of clusters of similar writing styles.

Item Adaptive OSS: Principles and Design of an Adaptive OSS for 5G Networks (Aalto University, 2024) Mfula, Harrison; Nurminen, Jukka K., Prof., University of Helsinki, Finland; Tietotekniikan laitos; Department of Computer Science; Perustieteiden korkeakoulu; School of Science; Ylä-Jääski, Antti, Prof., Department of Computer Science, Aalto University, Finland
In recent years, the rise and continued popularity of connected applications have resulted in explosive growth in the demand for wireless broadband services with high speed, massive capacity, and ultra-low latency, such as video-on-demand services, the Internet of Things, and mission-critical applications. 5G technology is designed to provide the required connectivity for these applications.
As a consequence of its continued success, seamless connectivity has become almost synonymous with a human right. At the moment, the vast potential benefits of 5G technology are driving a gold rush of rapid worldwide 5G deployments, which has led to a significant gap in investment, research, and development of suitable operations support system (OSS) solutions for the daily operation, monitoring, and control of 5G networks. Furthermore, as the number of 5G deployments continues to rise, high data traffic volumes and stakeholder expectations of seamless connectivity from anything to anything have become the norm. In this regard, the need for suitable OSS solutions has become critical. This dissertation fills the identified gap in the following way. First, we design a scalable architecture that enables batch and stream processing for high-throughput, high-volume, ultra-low-latency, data-driven OSS solutions that effectively support existing and 5G OSS use cases. Building on the resulting architecture, we extend existing, and in some cases develop new, SON algorithms to meet 5G requirements. In particular, we develop adaptive algorithms focusing on self-configuration, self-optimization, self-healing, and SON-coordination use cases. Furthermore, we introduce solutions for transitioning from the current, mainly proprietary, OSS hardware to vendor-agnostic cloud-native dynamic infrastructure. Lastly, we make the digitization of OSS operations more efficient. Specifically, we develop an artificial-intelligence-based solution (AIOps) for conducting OSS operations efficiently at cloud scale.
Using the findings and proposed solutions in this dissertation, vendors and service providers can design and implement solutions that meet the stringent business and technical requirements of applications running on top of 5G networks and beyond.

Item Adaptive probabilistic roadmap construction with multi-heuristic local planning (Helsinki University of Technology, 2003-09-26) Isto, Pekka; Department of Computer Science and Engineering; Tietotekniikan osasto; Industrial IT Laboratory; Teollisuuden tietotekniikan laboratorio
The motion planning problem is the computation of a collision-free motion for a movable object among obstacles from a given initial placement to a given end placement. Efficient motion planning methods have applications in many fields, such as robotics, computer-aided design, and pharmacology. The problem is known to be PSPACE-hard, so practical applications often use heuristic or incomplete algorithms. The probabilistic roadmap is a probabilistically complete motion planning method that has been an object of intensive study over the past years. The method is known to be susceptible to the problem of "narrow passages": finding a motion that passes through a narrow, winding tunnel can be very expensive. This thesis presents a probabilistic roadmap method that addresses the narrow passage problem with a local planner based on heuristic search. The algorithm is suitable for planning motions for rigid bodies and articulated robots, including multi-robot systems with many degrees of freedom. Variants of the algorithm are described for single-query planning, single-query planning on a distributed-memory parallel computer, and a preprocessing type of learning algorithm for multiple-query planning. An empirical study of the balance between local and global planning reveals that no universal optimal balance is likely to exist.
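For background, the basic probabilistic roadmap cycle these experiments build on (sample free configurations, connect neighbours with a local planner, answer queries by graph search) can be sketched in 2D. The obstacle, counts, and parameters below are invented for illustration; the straight-line local planner is the simple kind the dissertation improves upon.

```python
import math, random
from collections import defaultdict, deque

# Minimal 2D probabilistic roadmap (PRM) with a straight-line local planner.
OBSTACLE = ((0.5, 0.5), 0.2)          # circular obstacle: centre, radius

def collision_free(p):
    (cx, cy), r = OBSTACLE
    return math.hypot(p[0] - cx, p[1] - cy) > r

def segment_free(a, b, steps=20):
    # Local planner: check evenly spaced points on the straight segment.
    return all(collision_free((a[0] + (b[0] - a[0]) * t / steps,
                               a[1] + (b[1] - a[1]) * t / steps))
               for t in range(steps + 1))

def build_roadmap(n=200, k=8, seed=1):
    random.seed(seed)
    nodes = []
    while len(nodes) < n:                      # sample free configurations
        p = (random.random(), random.random())
        if collision_free(p):
            nodes.append(p)
    edges = defaultdict(list)
    for i, p in enumerate(nodes):              # connect k nearest neighbours
        near = sorted(range(n), key=lambda j: math.dist(p, nodes[j]))[1:k + 1]
        for j in near:
            if segment_free(p, nodes[j]):
                edges[i].append(j)
                edges[j].append(i)
    return nodes, edges

def query(nodes, edges, start, goal):
    # Attach start and goal to the nearest visible roadmap node, then BFS.
    def attach(q):
        for i in sorted(range(len(nodes)), key=lambda j: math.dist(q, nodes[j])):
            if segment_free(q, nodes[i]):
                return i
    s, g = attach(start), attach(goal)
    seen, frontier = {s}, deque([s])
    while frontier:
        i = frontier.popleft()
        if i == g:
            return True
        for j in edges[i]:
            if j not in seen:
                seen.add(j)
                frontier.append(j)
    return False

nodes, edges = build_roadmap()
found = query(nodes, edges, (0.05, 0.05), (0.95, 0.95))
print(found)
```

With an open obstacle like this one the roadmap connects easily; the narrow-passage problem arises precisely when the free space contains thin corridors that uniform sampling rarely hits, which is what the multi-heuristic local planner targets.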
Furthermore, it appears that many traditional simple local planners are too weak to solve problems with narrow passages efficiently. The empirical results show that local planners based on backtracking search are more efficient than the traditional local planners when a motion through a narrow passage is to be planned. The parallel variant has acceptable scalability on a parallel computer built from commodity components. It is also observed that run-time adjustment of the search parameters can reduce the variance of the run-cost, a known but little-studied deficiency of randomized motion planning methods. It is suggested that future research in randomized motion planning algorithms should address run-cost variance as an important performance characteristic. The results are obtained with empirical methods and established procedures from the design and analysis of experiments. The algorithms are assessed with a number of test problems, including known benchmark problems from the literature.

Item Adiabatic control in circuit quantum electrodynamics (Aalto University, 2018) Vepsäläinen, Antti; Paraoanu, Sorin, Dr., Aalto University, Department of Applied Physics, Finland; Teknillisen fysiikan laitos; Department of Applied Physics; Kvantti group; Perustieteiden korkeakoulu; School of Science; Pekola, Jukka, Prof., Aalto University, Department of Applied Physics, Finland
In circuit quantum electrodynamics, the coherence of Cooper pairs in superconductors is employed to create macroscopic electric circuits with quantized energy levels. Such circuits can be coupled with each other and exploited as building blocks of a quantum computer. Accurate and robust control of the quantum state of the circuits is a central condition for the operation of a quantum computer and one of the prerequisites for implementing complex algorithms used in quantum information processing, such as quantum error correction.
In this thesis, adiabatic control in circuit quantum electrodynamics is investigated, with a focus on manipulating three-level systems. In adiabatic control, the eigenstates of the system are slowly modified by changing the external control parameters that govern the evolution of the system. If the changes in the parameters are slow enough, the state of the system follows the eigenstates in the adiabatic basis, thereby realizing the intended operation. The advantage of adiabatic control is its inherent robustness to small errors or noise in the control parameters: the result of the state manipulation depends only on the asymptotic values of the control parameters, not on their exact values during the process. Shortcuts to adiabaticity can be used to speed up the otherwise slow adiabatic control by introducing a correction pulse that compensates for the diabatic losses during the state manipulation. This overcomes the limitation on the speed of the protocol but simultaneously reduces the method's robustness to variations in the control parameters. If the level of noise in the control parameters is known, the shortcut makes it possible to find the optimal level of robustness required to mitigate the noise. Circuit quantum electrodynamics offers an ideal experimental platform for investigating quantum control, owing to the possibility of realizing complicated control schemes with microwave electronics. With commercially available digital-to-analog converters, the control signals can be created digitally, which enables accurate and coherent control of the quantum circuit. This thesis presents both theoretical and experimental results on adiabatic control applied to superconducting transmon circuits. It is shown that stimulated Raman adiabatic passage can be used for population transfer in a three-level transmon, which can be further improved using shortcuts to adiabaticity.
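The stimulated Raman adiabatic passage (STIRAP) mentioned above can be reproduced numerically in an idealized, resonant three-level model: two Gaussian pulses applied in the counterintuitive order transfer population from the first to the third level without populating the middle one. All parameters below are invented for illustration and are not the experimental values of the thesis.

```python
import numpy as np
from scipy.linalg import expm

# Numerical sketch of STIRAP in an ideal resonant three-level system
# (rotating-wave picture); illustrative parameters only.
def gaussian(t, t0, tau):
    return np.exp(-((t - t0) / tau) ** 2)

T, dt = 10.0, 0.005
omega0, tau = 20.0, 1.5               # peak Rabi frequency, pulse width

state = np.array([1.0, 0.0, 0.0], dtype=complex)   # start in |1>
for t in np.arange(0.0, T, dt):
    # Counterintuitive ordering: the Stokes pulse (2<->3) peaks before
    # the pump pulse (1<->2), keeping the system in the dark state.
    pump = omega0 * gaussian(t, 5.8, tau)
    stokes = omega0 * gaussian(t, 4.2, tau)
    H = 0.5 * np.array([[0, pump, 0],
                        [pump, 0, stokes],
                        [0, stokes, 0]], dtype=complex)
    state = expm(-1j * H * dt) @ state             # one time step

p3 = abs(state[2]) ** 2
print(p3 > 0.9)   # nearly all population ends up in |3>
```

Because the transfer rides the dark state, the final populations are insensitive to moderate changes in the pulse amplitudes and timing, which is the robustness property the thesis exploits and then trades off when applying shortcuts to adiabaticity.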
Furthermore, a scheme for implementing robust superadiabatic rotation gates in a transmon is proposed. Finally, it is demonstrated that a superconducting qubit can be used as an ultra-sensitive detector of magnetic flux.

Item Adiabatic melting experiment: ultra-low temperatures in helium mixtures (Aalto University, 2020) Riekki, Tapio S.; Tuoriniemi, Juha, Dr., Aalto University, Department of Applied Physics, Finland; Teknillisen fysiikan laitos; Department of Applied Physics; µKI group; Perustieteiden korkeakoulu; School of Science; Hakonen, Pertti, Prof., Aalto University, Department of Applied Physics, Finland
A mixture of the two stable helium isotopes, 3He and 4He, is a versatile system to study at low temperatures: it combines two fundamentally different kinds of quantum mechanical particles, fermions and bosons. The bosonic 4He component of the dilute mixture is known to become superfluid at about 2 K, while superfluidity of the dilute fermionic 3He component has not yet been observed; the transition is anticipated to occur below 0.0001 K (i.e., 100 µK). To reach such ultra-low temperatures, new cooling methods need to be developed, one of which is the main subject of this thesis. Current well-established cooling methods rely on external cooling, where a metallic coolant is used to decrease the temperature of a liquid helium sample; their performance is limited by rapidly increasing thermal boundary resistance. Our novel adiabatic melting method relies on an internal cooling process, where both the coolant and the sample are the same helium. First, we create a phase separation in the mixture by increasing its pressure to about 25 times atmospheric pressure. This solidifies the 4He component, ideally leaving a system of pure solid 4He and pure liquid 3He. The phase-separated system is then precooled by conventional methods, after which the solid is melted.
This allows 4He to mix with 3He again in a heat-absorbing process, resulting in a saturated mixture with about 8% molar 3He concentration. In theory, the mixing can reduce the temperature by more than a factor of 1000, but external heat leaks and imperfect phase separation reduced this to a factor of 5-7 in this work. We study the performance of the melting method under various conditions, such as different melting rates, various total amounts of 3He, and alternate configurations of the setup. We also developed a computational model of the system, which was needed to evaluate the lowest achieved temperatures, as the mechanical oscillators used for thermometry had already become insensitive. For this model, we studied the thermal coupling parameters of our system, including thermal boundary resistances and 3He thermal conductivity. The lowest resolved temperature was (90 ± 20) µK, still above the superfluid transition of the 3He component of the mixture. We also present suggestions for future improvements to the setup.

Item Adopting Agile Methods in Large-scale Organizations using Scaling Frameworks (Aalto University, 2022) Putta, Abheeshta; Paasivaara, Maria, Prof., Aalto University, Finland; Tietotekniikan laitos; Department of Computer Science; Software Process Research Group; Perustieteiden korkeakoulu; School of Science; Lassenius, Casper, Prof., Aalto University, Department of Computer Science, Finland
Agile methods were originally developed for small, co-located teams. Their popularity and success in small teams led to growing interest in agile adoption across large organizations as well. However, scaling agile brings several challenges, e.g., coordination between a large number of teams and integration of non-development units such as HR and marketing. Scaling frameworks that support scaling agile to large organizations, e.g., the Scaled Agile Framework (SAFe) and Large-Scale Scrum (LeSS), have become popular in the recent past.
Despite their popularity, there is very little scientific research on the usage of scaling frameworks. The primary goal of the thesis is to investigate the adoption and usage of scaling frameworks in practice. The goal is divided into two parts: a) scaling framework usage and adoption, and b) SAFe usage and adoption. In the first part, we conducted two surveys. The first survey explored why the frameworks were developed, how they evolved, and their benefits and challenges, directly from the practitioners who developed them. In the second survey, we collected data from 204 software practitioners using scaling frameworks to understand the reasons for, expected benefits of, and satisfaction with using them. In the second part, we conducted a multivocal literature review (MLR), owing to the lack of scientific evidence on SAFe, to understand the benefits and challenges of SAFe adoption. Next, we conducted an in-depth case study to explore the reasons for, transformation process of, and benefits and challenges of SAFe adoption. To get a wider overview, we then conducted a survey exploring the benefits and challenges of SAFe. Our results for the first part show that the majority of the frameworks were designed to improve agility, collaboration, coordination, and synchronization between agile teams. The most common reasons for their adoption were to scale to more people and to deal with existing challenges and pain points. The benefits of adopting these frameworks were categorized into business, product, organizational, and cultural benefits, and the challenges into implementation, organizational, and scope challenges. Our results for the second part show that the reasons for SAFe adoption are organizational, business-related, and framework-specific. SAFe transformation activities typically map to the SAFe roadmap activities. The most common benefits of SAFe adoption are improved transparency, improved collaboration, and faster time to market.
The most significant challenges of SAFe adoption are identifying value streams and forming ARTs, change resistance, and inculcating an agile mindset. More in-depth research on scaling frameworks is needed to establish the effectiveness of their usage in practice. We encourage researchers to conduct in-depth case studies on their usage and adoption.Item Adoption of strategic goals : exploring the success of strategy implementation through organizational activities(Helsinki University of Technology, 2007-07-16) Aaltonen, Petri; Department of Industrial Engineering and Management; Tuotantotalouden osasto; Laboratory of Work Psychology and Leadership; Työpsykologian ja johtamisen laboratorioThis study is about the success of strategy implementation. Implementation, the conceptual counterpart of strategy formulation, has been regarded as an extremely challenging area in management practice. Still, strategy implementation has received remarkably little attention in the strategic management literature. The existing implementation frameworks are mostly normative and rather limited. On the other hand, the strategy as practice research agenda has emerged to study strategy on the micro level, as a social phenomenon. Practice researchers have introduced an activity-based view on strategy, which is concerned with the day-to-day activities of organizational life that relate to strategic outcomes. Still, there is a clear need to know more about these strategic activities: what they are like, and how they are related to strategic outcomes. This study explores the success of strategy implementation in terms of organizational activities, by focusing on two questions: how are strategic goals realized through organizational activities, and how are strategic activities related to the successful adoption of a strategic goal?
The research questions are addressed empirically in a multiple case study setting, in which qualitative data from 101 interviews, together with rich supplementary archival data, are generated and analyzed with a grounded theory approach. The analysis produces a general strategic activity categorization consisting of 25 activities under five main activity categories: determining, communicating, controlling, organizing, and interacting with the environment. The activities divide into existing and desired ones; the desired activities further divide into enhancing and novel ones. The analysis reveals that successful adoption of a strategic goal is related to the existence of so-called necessary strategic activities, a moderate set of desired activities that enhance the existing ones, and an extensive repertoire of novel desired activities. In addition, the scope of the strategic goal's origin and its coherence with other elements of strategy are proposed to contribute to the adoption of the strategic goal. The study contributes to the strategy as practice discussion by taking the activity-based view seriously and showing in detail what the strategic activities are like and how they are linked to the success of strategy implementation. The research reveals that strategy implementation is a much more complicated, creative, communicative, and externally oriented phenomenon than the extant literature presents. Furthermore, this study adds to the very limited empirical research on how strategies are adopted and enacted at all organizational levels.
The practical implications of the study concern the critical evaluation of existing and desired activity patterns, as well as understanding the significance of the strategic goal's origin and the coherence of the strategic whole.Item Adoption problems of modern release engineering practices(Aalto University, 2017) Laukkanen, Eero; Itkonen, Juha, Dr., Nitor Delta Oy, Finland; Paasivaara, Maria, Prof., University of Copenhagen, Denmark; Tietotekniikan laitos; Department of Computer Science; Software Process Research Group; Perustieteiden korkeakoulu; School of Science; Lassenius, Casper, Prof., Aalto University, Department of Computer Science, FinlandRelease engineering is the process of delivering the individual changes made to a software system to its end users with high quality. Modern release engineering practices emphasize build, test, and deployment automation, as well as collaboration across functional boundaries, so that both speed and quality can be achieved in the release engineering process. While some companies have been successful in adopting modern release engineering practices, other companies have found the adoption to be problematic. In this dissertation, we aim to understand what prevents organizations from adopting modern release engineering practices. First, we conducted a systematic literature review to survey the literature about the adoption problems, their causes, and solutions. In addition, we conducted four case studies which included qualitative interviews and software repository mining as data collection methods. In the first case study, we investigated the adoption problems in a distributed organization. The case study was extended in a follow-up case study, in order to see how the release stabilization period could be reduced after the adoption efforts. In the third case study, we focused on the consequences of the stage-gate development process and how it explained the adoption problems.
Finally, in the fourth case study, we compared two organizations working with similar products in different organizational contexts, in order to isolate the effects of the organizational context. This dissertation finds that adopting modern release engineering practices decreases the time needed for stabilizing a software system before it can be deployed. Problems during the adoption of modern release engineering practices are categorized under the problem themes of technical adoption and social adoption. Technical adoption problems include build automation (slow, complex, or inflexible build), test automation (slow, unreliable, or insufficient tests), and deployment automation (slow, complex, or unreliable deployment) problems, while social adoption problems include organizational adoption (lack of resources or coordination) and individual adoption (lack of motivation or experience) problems. These primary problems can be traced to three identified explanations: system design (system not testable or deployable) explains technical adoption problems; organizational distribution (more difficult communication, motivation, and coordination) explains social adoption problems; and limited resources explain both adoption problem themes. Organizations can use the results of the dissertation to design their software processes and practices to suit modern release engineering practices.Item Advanced mobile network monitoring and automated optimization methods(Helsinki University of Technology, 2006-03-23) Höglund, Albert; Department of Engineering Physics and Mathematics; Teknillisen fysiikan ja matematiikan osasto; Systems Analysis Laboratory; Systeemianalyysin laboratorioThe operation of mobile networks is a complex task: the networks serve a large number of subscribers with both voice and data services, contain extensive sets of elements, generate large amounts of measurement data, and are controlled by a large number of parameters.
The objective of this thesis was to ease the operation of mobile networks by introducing advanced monitoring and automated optimization methods. In the monitoring domain, the thesis introduced visualization and anomaly detection methods that were applied to detect intrusions and malfunctioning network elements, and to cluster network elements for parameter optimization at the network-element-cluster level. A key component in the monitoring methods was the Self-Organizing Map. In the automated optimization domain, several rule-based Wideband CDMA radio access parameter optimization methods were introduced. The methods tackled automated optimization in areas such as admission control, handover control, and base station cell size setting. The results from test usage of the monitoring methods indicated good performance, and simulations indicated that the automated optimization methods enable significant improvements in mobile network performance. The presented methods constitute promising feature candidates for the mobile network management system.Item Advanced source separation methods with applications to spatio-temporal datasets(Helsinki University of Technology, 2006-11-03) Ilin, Alexander; Department of Computer Science and Engineering; Tietotekniikan osasto; Laboratory of Computer and Information Science; Informaatiotekniikan laboratorioLatent variable models are useful tools for statistical data analysis in many applications. Examples of popular models include factor analysis, state-space models, and independent component analysis. These types of models can be used for solving the source separation problem, in which the latent variables should have a meaningful interpretation and represent the actual sources generating the data. Source separation methods are the main focus of this work. Bayesian statistical theory provides a principled way to learn latent variable models and therefore to solve the source separation problem.
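The source separation setting described above can be reduced to the noiseless linear mixing model x = As. The sketch below is purely illustrative: the signals and the mixing matrix are made up, and with A known the recovery is trivial matrix inversion, whereas the methods studied in this work must infer A and the sources blindly from x alone.

```python
import numpy as np

# Toy illustration of the linear mixing model x = A s that underlies
# factor analysis and ICA. All signals and the mixing matrix are
# invented for illustration.
t = np.linspace(0, 1, 500)
s = np.vstack([np.sin(2 * np.pi * 5 * t),             # source 1: 5 Hz sine
               np.sign(np.sin(2 * np.pi * 3 * t))])   # source 2: 3 Hz square wave

A = np.array([[1.0, 0.6],
              [0.4, 1.0]])   # hypothetical mixing matrix
x = A @ s                    # observed mixtures

# With A known, the sources are recovered exactly by inversion; blind
# methods (ICA, variational Bayes) instead estimate A and s jointly.
s_hat = np.linalg.inv(A) @ x
print(np.allclose(s_hat, s))  # → True: the mixing is invertible
```

Real data additionally involves noise and an unknown number of sources, which is where the Bayesian treatment of the model becomes important.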
The first part of this work studies variational Bayesian methods and their application to different latent variable models. The properties of variational Bayesian methods are investigated both theoretically and experimentally using linear source separation models. A new nonlinear factor analysis model is presented which restricts the generative mapping to the practically important case of post-nonlinear mixtures. The variational Bayesian approach to learning nonlinear state-space models is studied as well. This method is applied to the practical problem of detecting changes in the dynamics of complex nonlinear processes. The main drawback of Bayesian methods is their high computational burden. This complicates their use for exploratory data analysis, in which observed data regularities often suggest what kinds of models could be tried. Therefore, the second part of this work proposes several faster source separation algorithms implemented in a common algorithmic framework. The proposed approaches separate the sources by analyzing their spectral contents, decoupling their dynamic models, or optimizing their prominent variance structures. These algorithms are applied to spatio-temporal datasets containing global climate measurements collected over a long period of time.Item Advanced synthesis of single-walled carbon nanotube films by aerosol CVD method for electro-optical applications(Aalto University, 2019) Iakovlev, Vsevolod Ya.; Krasnikov, Dmitry V., Dr., Skolkovo Institute of Science and Technology, Russia; Teknillisen fysiikan laitos; Department of Applied Physics; Perustieteiden korkeakoulu; School of Science; Kauppinen, Esko I., Prof., Aalto University, Department of Applied Physics, Finland; Nasibulin, Albert G., Prof., Skolkovo Institute of Science and Technology, RussiaSingle-walled carbon nanotubes (SWCNTs) are a unique family of materials emerging towards high-performance applications in electronics and optoelectronics.
However, despite significant progress over the last 25 years, the production of SWCNTs with tailored characteristics, the absence of growth models, and the lack of post-synthesis methods to improve specific SWCNT properties remain the main barriers to a wide range of applications. The existing methods of SWCNT synthesis, data processing, and SWCNT treatment are still not fully optimized. In the current thesis, the abovementioned problems were addressed, followed by demonstrations of advanced applications of SWCNT films. A new reactor for the aerosol CVD growth of SWCNTs, equipped with a spark-discharge generator of catalyst nanoparticles, was developed. The proposed design resulted in a more robust apparatus than the ferrocene CVD reactor. The stability and scalability of the SWCNT synthesis are the main benefits of the spark-discharge generator. Advanced control of the diameter distribution, defectiveness, and yield was achieved for the first time through the use of artificial neural networks (ANNs). This allowed us to precisely capture the relation between the reactor parameters and the key SWCNT characteristics. The proposed methodology can help adjust the diameter distribution, yield, and quality of SWCNTs with prediction errors of 4%, 14%, and 23%, respectively, from a very limited data set. A novel technique for the post-synthesis improvement of the electro-optical characteristics of SWCNT films by laser treatment was proposed. In this process, short-pulse laser irradiation increases the transparency of the SWCNT film without any change in conductivity, presumably due to the oxidation of the catalyst particles. We improved transparency by 4% and decreased the equivalent sheet resistance by 21%. All the developed techniques and methods contribute to the synthesis of SWCNTs with defined characteristics and open new possibilities for their applications.
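The ANN-based control described above amounts to a regression from reactor parameters to nanotube characteristics. The following sketch trains a tiny one-hidden-layer network on synthetic data to illustrate the idea; the input names, target function, and architecture are hypothetical and not taken from the thesis.

```python
import numpy as np

# Minimal sketch of a neural network mapping reactor parameters to an
# SWCNT characteristic. The "data" is a synthetic smooth function and
# the tiny architecture is illustrative only.
rng = np.random.default_rng(1)

# hypothetical inputs: temperature, precursor flow, residence time (normalized)
X = rng.uniform(0, 1, size=(200, 3))
# hypothetical target: mean tube diameter as a smooth function of the inputs
y = 1.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1] * X[:, 2]

# one hidden tanh layer, trained with plain full-batch gradient descent
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)                 # hidden activations
    err = (H @ W2 + b2).ravel() - y          # squared-error residual
    gW2 = H.T @ err[:, None] / len(X)
    gb2 = err.mean(keepdims=True)
    dH = err[:, None] * W2.T * (1 - H**2)    # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

pred = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
rel_error = np.mean(np.abs(pred - y) / np.abs(y))
print(f"mean relative prediction error: {rel_error:.1%}")
```

In the thesis the analogous model was fitted to real reactor measurements, where the reported prediction errors (4–23%) reflect the very limited experimental data rather than the optimizer.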
The thesis work includes three different electro-optical devices with advanced performance: i) a bolometer based on a freestanding SWCNT film showing a response time of 2.6 ms at room temperature and 1 mbar (several times faster than corresponding industrially applied devices); ii) an SWCNT-based heating element on a fiber Bragg grating for smooth tuning of the resonant wavelength and a stable laser signal; iii) a saturable absorber based on SWCNT films showing femtosecond pulse generation and a low degradation rate.Item Advanced Techniques in Magnetic Resonance-Guided High-Intensity Focused Ultrasound(Aalto University, 2017) Tillander, Matti; Nieminen, Heikki J., Docent, University of Helsinki, Finland; Neurotieteen ja lääketieteellisen tekniikan laitos; Department of Neuroscience and Biomedical Engineering; Perustieteiden korkeakoulu; School of Science; Ilmoniemi, Risto, Prof., Aalto University, Department of Neuroscience and Biomedical Engineering, FinlandHigh-intensity focused ultrasound (HIFU) is an emerging therapeutic technique that can be used to heat tissue locally and non-invasively through the skin. Heating the tissue to a sufficiently high temperature, e.g., 57°C, induces instantaneous cell death through thermal coagulative necrosis. Alternatively, HIFU can be used for hyperthermia, in which tissue temperature is maintained within a range of 40–45°C for a longer period of time (e.g., 60 min) to enhance the effect of other therapy modalities, such as radiotherapy and chemotherapy. Magnetic resonance imaging (MRI) as guidance of HIFU treatments (MR-HIFU) provides means for spatially targeting the treatment, measuring the temperature evolution in real time during heating, and evaluating tissue damage after therapy. The primary aim of this Thesis was to develop a hyperthermia solution for an MR-HIFU system to enable safe heating of large volumes at clinically relevant depths from the skin with reliable MRI-based temperature mapping.
The solution combined mechanical movements of the ultrasound transducer, electronic ultrasound focus steering, and the selective use of transducer elements to control the temperature distribution within the entire acoustic beam path. Several MRI temperature mapping slices were used to control and monitor the heating in real time. Safe hyperthermia heating with good temperature uniformity was achieved in animal experiments in vivo. Furthermore, in vivo animal experiments and a patient imaging study showed that the developed hyperthermia solution was feasible for hyperthermia of recurrent rectal cancer. Finally, technical solutions enabled long-duration MRI-based thermometry with an accuracy better than 1°C. The secondary aim of this Thesis was to present new ways of using phased-array transducers to shape the emitted acoustic field and thereby improve performance in different MR-HIFU applications. First, multifoci heating reduced the peak pressure experienced by the tissue in hyperthermia heating when compared to single steered-focus heating. Second, the adjustment of transducer-element phases utilizing time-of-flight estimation based on MRI images improved the focus sharpness in a heterogeneous phantom simulating the conditions of breast tissue. Third, simulated acoustic intensity predicted bone heating, which enables a fast, quantitative reduction of bone heating through the deactivation of transducer elements in intercostal sonications.
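The electronic focus steering and time-of-flight phase adjustment mentioned above rest on the same geometric principle: each transducer element is driven with a phase that compensates its travel time to the focal point, so all wavefronts arrive there in phase. A minimal sketch, with illustrative element layout, frequency, and sound speed rather than the parameters of the thesis system:

```python
import numpy as np

# Sketch of time-of-flight phasing for a phased-array transducer.
# A 16-element linear array (illustrative geometry) is phased so that
# every element's wave arrives at a steered focal point in phase.
c = 1540.0          # nominal speed of sound in soft tissue, m/s
f = 1.2e6           # illustrative drive frequency, Hz
elements = np.column_stack([np.linspace(-0.06, 0.06, 16),
                            np.zeros(16), np.zeros(16)])
focus = np.array([0.01, 0.0, 0.10])   # focus 10 cm deep, 1 cm off-axis

d = np.linalg.norm(elements - focus, axis=1)     # element-to-focus distances
tof = d / c                                      # time of flight per element
# phase advance that cancels the travel-time differences
phases = (2 * np.pi * f * (tof.max() - tof)) % (2 * np.pi)

# every wave now arrives at the focus with the same total phase
arrival = (2 * np.pi * f * tof + phases) % (2 * np.pi)
print(np.allclose(arrival, arrival[0]))  # → True
```

In practice the speed of sound varies along each ray, which is exactly why the thesis estimated per-element time of flight from MRI images instead of assuming a homogeneous medium.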
The technological solutions presented in this Thesis advance the field of MR-HIFU towards translation into clinical practice.Item Advances in AI-assisted Game Testing(Aalto University, 2022) Roohi, Shaghayegh; Tietotekniikan laitos; Department of Computer Science; Aalto Game Research Group; Perustieteiden korkeakoulu; School of Science; Hämäläinen, Perttu, Prof., Aalto University, Department of Computer Science, FinlandGame testing is an essential part of game development, in which developers try to select a game design that delivers a desirable experience for the players and engages them. However, the interactive nature of games makes player experience and behavior unpredictable. Therefore, game testing requires collecting a large amount of playtest data in iterative sessions, which makes it time-consuming and expensive. Game testing includes a wide range of aspects, from finding bugs and balancing game parameters to modeling player behavior and experience. This dissertation mostly concentrates on the player experience aspect. It proposes methods for (partially) automating and facilitating the game testing process. The first part of the dissertation focuses on player emotion analysis and proposes tools and methods for automatically processing and summarizing human playtesters' data. The second part of the dissertation concentrates on simulation-based approaches for modeling player experience and behavior to reduce the need for human playtesters. In the first publication, we use deep neural networks for analyzing player facial expression data and provide a visualization tool for inspecting affect changes at game events, which replicates earlier results of physiological emotion analysis. Next, we extend this work by introducing a new dataset of game streamers' emotions at different granularities and considering other input signals, such as audio and speech, for automatic emotion recognition.
In the second part of the dissertation, simulation-based methods and reinforcement learning agents are used to predict game difficulty and engagement and to capture the relation between these two game metrics. In summary, this dissertation proposes and evaluates methods that advance automatic game testing: approaches for the automatic analysis of player emotions, which can be used to select specific segments of playtest videos for further inspection, and accurate simulation-based models of player experience and behavior, which can be used to detect problematic game levels before releasing them to actual players.Item Advances in Analysing Temporal Data(Aalto University, 2017) Kostakis, Orestis; Tietotekniikan laitos; Department of Computer Science; Data Mining Group; Perustieteiden korkeakoulu; School of Science; Gionis, Aristides, Associate Prof., Aalto University, Department of Computer Science, FinlandModern technical capabilities and systems have resulted in an abundance of data. A significant portion of all that data is of a temporal nature. Hence, it becomes imperative to design effective and efficient algorithms and solutions that enable searching and analysing large databases of temporal data. This thesis contains several contributions related to the broad scientific field of temporal-data analysis. First, we present a distance function for pairs of event-interval sequences, together with proofs of important properties (e.g., that the function is a metric) and a lower-bounding function. An embedding-based indexing method is proposed for searching through large databases of event-interval sequences under this distance function. Second, we study the problem of subsequence search for event-interval sequences. This includes hardness results, an exact worst-case exponential-time algorithm, two upper bounds, and a scheme for approximation algorithms.
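The value of a lower-bounding function like the one above is that it lets a search procedure skip most exact distance computations. The following generic sketch shows the pruning pattern with toy stand-ins: plain number lists in place of event-interval sequences, L1 distance in place of the exact distance, and a sum-based bound in place of the thesis' lower-bounding function.

```python
# Generic nearest-neighbour search with lower-bound pruning. Both
# distance functions are toy stand-ins for the (far more involved)
# event-interval distance and its lower bound.

def true_distance(a, b):
    # stand-in for the expensive exact distance: L1 on zero-padded lists
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a)); b = b + [0] * (n - len(b))
    return sum(abs(x - y) for x, y in zip(a, b))

def lower_bound(a, b):
    # cheap bound: |sum(a) - sum(b)| <= L1 distance, so it never overestimates
    return abs(sum(a) - sum(b))

def nearest(query, database):
    best, best_d, pruned = None, float("inf"), 0
    for seq in database:
        if lower_bound(query, seq) >= best_d:
            pruned += 1           # skip: this candidate cannot beat the best
            continue
        d = true_distance(query, seq)
        if d < best_d:
            best, best_d = seq, d
    return best, best_d, pruned

db = [[1, 2, 3], [10, 10], [1, 2, 2], [50, 1], [2, 2, 3]]
print(nearest([1, 2, 3], db))  # → ([1, 2, 3], 0, 4): four candidates pruned
```

Because the bound never exceeds the true distance, pruning is exact: the answer is identical to brute-force search, only cheaper. The embedding-based index in the thesis plays the same role at database scale.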
In addition, an equivalence is established between graphs and event-interval sequences. This equivalence allows us to derive hardness results for several problems on event-interval sequences. Most importantly, it raises the question of which techniques, results, and methods from each of the fields of graph mining and temporal data mining can be applied to the other to advance the current state of the art. Third, for the problem of subsequence search, we propose an indexing method based on decomposing event-interval sequences into 2-interval patterns. The proposed indexing method is benchmarked against other approaches. In addition, we examine different variations of the problem and propose exact algorithms for solving them. Fourth, we describe a complete system that enables the clustering of a stream of graphs. The graphs are clustered into groups based on their distances, obtained by approximating the graph edit distance. The proposed clustering algorithm achieves a good clustering with few graph comparisons. The effectiveness and usefulness of the system are demonstrated by clustering function call-graphs of binary executable files for the purpose of malware detection. Finally, we solve the problem of summarising temporal networks. We assume that networks operate in certain modes and that the total sequence of interactions can be modelled as a series of transitions between these modes. We prove hardness results and provide heuristic procedures for finding approximate solutions. We demonstrate the quality of our methods via benchmarking and by performing case studies on datasets taken from sports and social networks.
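The graph-stream clustering above can be caricatured with a much simpler surrogate distance. The sketch below assumes vertex labels (e.g., function names in call-graphs) are already aligned, so the symmetric difference of edge sets serves as a crude stand-in for graph edit distance; the graphs, the threshold, and the greedy leader-clustering scheme are all illustrative, not the thesis method.

```python
# Distance-based clustering of a stream of (edge-set) graphs.
# Real graph edit distance must also match unlabeled vertices; here
# vertex labels are assumed aligned, so edge-set symmetric difference
# is a simple surrogate distance.

def edge_distance(g1, g2):
    # number of edges present in exactly one of the two graphs
    return len(g1 ^ g2)

def cluster(graphs, threshold):
    # greedy leader clustering: join the first cluster whose
    # representative is within the threshold, else start a new cluster
    clusters = []
    for g in graphs:
        for c in clusters:
            if edge_distance(g, c[0]) <= threshold:
                c.append(g)
                break
        else:
            clusters.append([g])
    return clusters

g_a = {("main", "read"), ("read", "parse")}
g_b = {("main", "read"), ("read", "parse"), ("parse", "log")}   # near g_a
g_c = {("start", "decrypt"), ("decrypt", "connect")}            # unrelated

clusters = cluster([g_a, g_b, g_c], threshold=1)
print(len(clusters))  # → 2: g_a and g_b group together, g_c stands alone
```

Comparing each new graph only against one representative per cluster is what keeps the number of (expensive) graph comparisons low, which mirrors the design goal stated for the system above.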