Browsing by Department "Matematiikan ja systeemianalyysin laitos" (Department of Mathematics and Systems Analysis)
Now showing 1 - 20 of 239
- Saavutettavuuspohjainen matkojen suuntautumisen ja kulkutavan valinnan simulointimalli
School of Science | Master's thesis (2011) Salomaa, Osmo. Transportation models are needed to assess the impact of planned transportation and land use projects and operations. Models currently in use are to a large extent outdated and theoretically problematic, and are poorly applicable to many current problems. The goal of this work is to develop a new kind of transportation model: one that takes trip chains into account, includes a time dimension, and in which ordinary daily trips and the related decision-making are handled by simulation. The model is implemented for the Helsinki commuting area using the spatial precision of a regular 250-meter grid. The main purpose of the model is to investigate how changes in land use and the transportation system affect traveling. Results from the model are examined for the present day and for the forecast year 2035 according to a draft of the 2011 Helsinki region transport system plan. According to the results, the model looks promising, and its strengths and limitations have been identified for further development and use. The model is applicable in the short term especially for applications that require spatial precision or the inclusion of pedestrian and bicycle trips in the analysis, and in the long term it can be considered a possible candidate to replace traditional aggregate models.
- Adaptive Emotion Based Decision Model for a Social Robot
Perustieteiden korkeakoulu | Master's thesis (2012) Sutinen, Martti. This thesis introduces a computational model of emotions and decisions for a robot that interacts meaningfully in a social context. The decision-making framework is based on multi-attribute utility theory, but it contains a dynamic and adaptive emotional model which essentially acts as a preference and perception manipulator. The emotional model is based on event appraisal with discrete emotion categories. Events are assessed using dimensions of utility and probability as well as expectations. The model uses the concepts of core affect and attributed affect to create a multilevel emotion consisting of moods and emotional events. Personality traits are used to create different emotional dynamics by modifying relevant parameters. Attitudes and relationships, understood through attributed affect and classical conditioning, make the robot's emotions more believable. The robot learns from user actions and makes predictions about them and about environment changes according to probabilistic models. Subjective well-being and the human need hierarchy are used as the basis for the preferences, which the visceral state affects. The model is inspired by the computational models Cathexis, FLAME, EMA, TAME and Roboceptionist, and is an expanded version of the model used in the AISoy1 robot. The framework combines extensive psychological research and still requires validation.
- Advancing incorporation of expert knowledge into Bayesian networks
School of Science | Doctoral dissertation (article-based) (2022) Laitila, Pekka. Bayesian networks (BNs) are used in many areas to support risk management and decision-making under uncertainty. A BN represents probabilistic relationships of variables and allows one to explore their interaction through various types of analyses. In applications, a lack of suitable data often necessitates that a BN is constructed at least partly based on the knowledge of a domain expert. Then, in order to manage limited time and the cognitive workload on the expert, it is vital to have efficient means to support the construction process. This Dissertation elaborates and improves the so-called ranked nodes method (RNM), which is used to quantify expert views on the probabilistic relationships of variables, i.e., nodes, of a BN. RNM is designed for nodes with discrete ordinal scales. With such nodes, the relationship of a descendant node and its direct ancestors is defined in a conditional probability table (CPT) that may consist of dozens or hundreds of conditional probabilities. RNM allows the generation of the CPT based on a small number of parameters elicited from the expert. However, the effective use of RNM can be difficult due to a lack of exact guidelines concerning the parameter elicitation and other user-controlled features. Furthermore, there remains ambiguity regarding the underlying theoretical principle of RNM. In addition, little is known about the general ability of CPTs generated with RNM to portray probabilistic relationships appearing in application areas of BNs. The Dissertation advances RNM with regard to the above shortcomings. The underlying theoretical principle of RNM is clarified, and experimental verification is provided of the general practical applicability of the method. The Dissertation also presents novel approaches for the elicitation of RNM parameters.
These include separate designs for nodes whose ordinal scales consist of subjective labeled states and for nodes formed by discretizing continuous scales. Two novel approaches are also presented for the discretization of continuous scales of nodes. The first produces static discretizations that stay intact when a BN is used. The other involves discretizations that update dynamically during the use of the BN. The theoretical and experimental insight that the Dissertation provides on RNM clears the way for its further development and helps to justify its deployment in applications. In turn, the novel elicitation and discretization approaches offer thorough and well-structured means for easier as well as more flexible and versatile utilization of RNM in applications. Consequently, the Dissertation also facilitates and promotes the effective and diverse use of BNs in various domains.
- Algebraic Aspects of Hidden Variable Models
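As a rough illustration of the ranked nodes idea described in the entry "Advancing incorporation of expert knowledge into Bayesian networks" above: parent and child ordinal states are mapped onto the unit interval, the expert supplies weights, and each CPT column is obtained by discretizing a normal distribution, truncated to [0, 1], around a weighted mean of the parent levels. The function names, weights, variance, and state count below are illustrative assumptions, not the dissertation's actual parameterization.

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of the normal distribution N(mu, sigma^2)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def rnm_style_column(parent_levels, weights, sigma=0.15, n_states=5):
    """One CPT column in the spirit of the ranked nodes method: parent
    states mapped to [0, 1] are combined as a weighted mean, and a normal
    around that mean is truncated to [0, 1] and discretized into the
    child's ordinal states."""
    mu = sum(w * x for w, x in zip(weights, parent_levels)) / sum(weights)
    edges = [i / n_states for i in range(n_states + 1)]
    masses = [normal_cdf(b, mu, sigma) - normal_cdf(a, mu, sigma)
              for a, b in zip(edges, edges[1:])]
    total = sum(masses)  # renormalize over the truncated support [0, 1]
    return [m / total for m in masses]
```

With both parents at their highest level, the column concentrates on the child's highest state; a single mid-level parent yields a column peaked at the middle state.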
School of Science | Doctoral dissertation (article-based) (2023) Ardiyansyah, Muhammad. Hidden variables are random variables that we cannot observe in reality but that are important for understanding the phenomenon of interest because they affect the observable variables. Hidden variable models aim to represent the effect of the presence of hidden variables which are theoretically thought to exist but on which we have no data. In this thesis, we focus on two hidden variable models in phylogenetics and statistics. In phylogenetics, we seek answers to two important questions related to modeling evolution. First, we study the embedding problem in the group-based models and in the strand symmetric model and its higher order generalizations. In Publication I, we provide some embeddability criteria in the group-based models equipped with certain labeling. In Publication III, we characterize embeddability in the strand symmetric model. These results allow us to measure approximately the proportion of the set of embeddable Markov matrices within the space of Markov matrices. These results generalize the previously established embeddability results on the Jukes-Cantor and Kimura models. The second question of our interest concerns the distinguishability of phylogenetic network models, which is related to the notion of generic identifiability. In Publication II, we provide some conditions on the network topology that ensure the distinguishability of their associated phylogenetic network models under some group-based models. The last part of this thesis is dedicated to studying the factor analysis model, which is a statistical model that seeks to reduce a large number of observable variables into a smaller number of hidden variables. The factor analysis model assumes that the observed variables can be presented as a linear combination of the hidden variables together with some error terms.
Moreover, the observed and the hidden variables together with the error terms are assumed to be Gaussian. We generalize the factor analysis model by dropping the Gaussianity assumption and introduce the higher order factor analysis model. In Publication IV, we provide the dimension of the higher order factor analysis model and present some conditions under which the model has positive codimension.
- The algebraic square peg problem
Perustieteiden korkeakoulu | Master's thesis (2014) van Heijst, Wouter. The square peg problem asks whether every continuous curve in the plane that starts and ends at the same point without self-intersecting contains four distinct corners of some square. Toeplitz conjectured in 1911 that this is indeed the case. A hundred years later we only have partial results for curves with additional smoothness properties. The contribution of this thesis is an algebraic variant of the square peg problem. By casting the set of squares inscribed on an algebraic plane curve as a variety and applying Bernshtein's Theorem, we are able to count the number of such squares. An algebraic plane curve defined by a polynomial of degree m inscribes either infinitely many squares, or at most (m^4 - 5m^2 + 4m)/4 squares. Computations using computer algebra software lend evidence to the claim that this upper bound is sharp for generic curves. Earlier work on Toeplitz's conjecture has shown that generically an odd number of squares is inscribed on a smooth enough Jordan curve. Examples of real cubics and quartics suggest that there is a similar parity condition on the number of squares inscribed on some topological types of algebraic plane curves that are not Jordan curves. Thus we are led to conjecture that algebraic plane curves homeomorphic to the real line inscribe an even number of squares.
- Algebraic Statistics
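As a quick sanity check on the bound quoted in the square peg entry above, the expression (m^4 - 5m^2 + 4m)/4 can be tabulated for small degrees; it is always an integer, since m^4 - 5m^2 + 4m = m(m-1)(m^2+m-4) and both m(m-1) and m^2+m-4 are even.

```python
def max_inscribed_squares(m):
    """Upper bound from the thesis on the number of squares inscribed on a
    degree-m algebraic plane curve (when that number is finite)."""
    return (m**4 - 5 * m**2 + 4 * m) // 4

# Tabulate the bound for conics through quintics.
for m in range(2, 6):
    print(m, max_inscribed_squares(m))
```

For a cubic the bound is 12 and for a quartic 48, matching the degrees for which the thesis reports computer algebra experiments.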
School of Science | Doctoral dissertation (article-based) (2013) Norén, Patrik. This thesis on algebraic statistics contains five papers. In paper I we define ideals of graph homomorphisms. These ideals generalize many of the toric ideals defined in terms of graphs that are important in algebraic statistics and commutative algebra. In paper II we study polytopes from subgraph statistics. Polytopes from subgraph statistics are important for statistical models for large graphs, and many problems in extremal graph theory can be stated in terms of them. We find easily described semi-algebraic sets that are contained in these polytopes, and using them we compute dimensions and obtain volume bounds for the polytopes. In paper III we study the topological Tverberg theorem and its generalizations. We develop a toolbox for complexes from graphs using vertex decomposability to bound the connectivity. In paper IV we prove a conjecture by Haws, Martin del Campo, Takemura and Yoshida. It states that the three-state toric homogeneous Markov chain model has Markov degree two. In algebraic terminology this means that a certain class of toric ideals is generated by quadratic binomials. In paper V we produce cellular resolutions for a large class of edge ideals and their powers. Using algebraic discrete Morse theory it is then possible to make many of these resolutions minimal; for example, explicit minimal resolutions for powers of edge ideals of paths are constructed this way.
- Analyses of Flight to Quality in Financial Markets
Helsinki University of Technology | Master's thesis (2008) Ehrnrooth, Markus. From time to time, prices of financial instruments react strongly and simultaneously. These sharp market movements are generally symptoms of risk aversion among investors and are called flights to quality, because investors rush to buy safe assets during them. This thesis studies the application of volatility and correlation estimates derived from high-frequency financial data to predict such flights to quality. The empirical part of the thesis is based on 5-minute data sampled from the period September 2004 to March 2008. Volatility and correlation estimators based on high-frequency data are presented and estimated. The challenges posed by high-frequency data, as well as methods for mitigating their effects, are presented. The results concerning flight-to-quality prediction show that the estimators built may not be used reliably for this purpose. The indicators may, however, prove useful in more general risk management.
- Analyticity of point measurements in inverse conductivity and scattering problems
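The volatility and correlation estimators referred to in the flight-to-quality entry above are, in their simplest form, realized measures built from intraday log returns. A minimal sketch, ignoring the microstructure-noise corrections that such theses typically discuss:

```python
import math

def realized_volatility(prices):
    """Realized volatility over one day of intraday prices: the square
    root of the sum of squared log returns."""
    logs = [math.log(p) for p in prices]
    returns = [b - a for a, b in zip(logs, logs[1:])]
    return math.sqrt(sum(r * r for r in returns))

def realized_correlation(prices_x, prices_y):
    """Realized correlation from two synchronized intraday price series:
    realized covariance normalized by the realized volatilities."""
    rx = [math.log(b / a) for a, b in zip(prices_x, prices_x[1:])]
    ry = [math.log(b / a) for a, b in zip(prices_y, prices_y[1:])]
    cov = sum(x * y for x, y in zip(rx, ry))
    vx = math.sqrt(sum(x * x for x in rx))
    vy = math.sqrt(sum(y * y for y in ry))
    return cov / (vx * vy)
```

A flight-to-quality signal would then look for jumps in realized volatility together with sharply negative stock-bond realized correlation.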
School of Science | Doctoral dissertation (article-based) (2013) Seiskari, Otto. Inverse conductivity and Helmholtz scattering problems with distributional boundary values are studied. In the context of electrical impedance tomography (EIT), the considered concepts can be interpreted in terms of measurements involving point-like electrodes. The notion of bisweep data of EIT, analogous to the far-field pattern in scattering theory, is introduced and applied in the theory of inverse conductivity problems. In particular, it is shown that bisweep data are the Schwartz kernel of the relative Neumann-to-Dirichlet map, and this result is employed in proving new partial data results for Calderón's problem. Similar techniques are also applied in the scattering context in order to prove the joint analyticity of the far-field pattern. Another recent concept, sweep data of EIT, analogous to far-field backscatter data, is studied further, and a numerical method for locating small inhomogeneities from sweep data is presented. It is also demonstrated how bisweep data and conformal maps can be used to reduce certain numerical inverse conductivity problems in piecewise smooth plane domains to equivalent problems in the unit disk.
- Approaches to Group Decision Support in Robust Portfolio Modeling
Helsinki University of Technology | Master's thesis (2008) Vilkkumaa, Eeva. Methods of multi-criteria decision analysis (MCDA) provide normative support for decision making processes. Out of these methods, Robust Portfolio Modeling (RPM) helps decision makers (DMs) select the best possible subset, or portfolio, of available projects in the presence of scarce resources. The RPM methods provide a computationally efficient way of generating non-dominated portfolios by accommodating incomplete preference information. This Thesis extends the RPM methods to the group decision making context, where the different and possibly conflicting interests of multiple DMs need to be synthesized to obtain a satisfactory compromise. As in the single-DM case, the information may be given with the preferred accuracy, and it is accommodated by set inclusion without the need for averaging or randomization. The methods also allow the group weight information describing possible inequality in the DMs' influence to be incomplete. A distinction between individual and joint approaches is made. In the individual approach the group choice is based on the DMs' individually non-dominated sets. The joint approach seeks to aggregate the DMs' individual information into a single preference model. A few decision rules with method-specific variants are presented for generating decision recommendations. Based on an illustrative case study, the methods seem computationally efficient and provide robust decision recommendations. As in the single-DM case, the group RPM methods are transparent and do not necessarily require mathematical expertise from the DMs. Owing to computational efficiency, multiple iterations may be carried out, whereby additional preference information can be elicited and utilized as the process evolves.
While the main contribution of this Thesis is the development of efficient group methods for portfolio selection problems with incomplete information, methods for generating group weight information sets based on the DMs' assessments are also presented.
- Approximations and Surrogates for Computational Inverse Boundary Value Problems
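At the core of the non-dominated portfolio generation mentioned in the Robust Portfolio Modeling entry above is a Pareto dominance check over portfolio-level criterion values. A minimal sketch; the actual RPM algorithms additionally handle incomplete weight information via set inclusion, which is not modeled here:

```python
def dominates(a, b):
    """True if criterion-value vector `a` dominates `b`: at least as good
    on every criterion and strictly better on at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def non_dominated(portfolios):
    """Filter a list of criterion-value vectors to the non-dominated ones,
    preserving the input order."""
    return [p for p in portfolios
            if not any(dominates(q, p) for q in portfolios)]
```

This brute-force filter is quadratic in the number of portfolios; RPM's contribution is, in part, generating the non-dominated set efficiently without enumerating all portfolios.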
School of Science | Doctoral dissertation (article-based) (2017) Mustonen, Lauri. Inverse boundary value problems are closely related to imaging techniques where measurements on the surface are used to estimate, or reconstruct, inner properties of the imaged object. In this thesis, improved reconstruction methods and new computational approaches are presented for elliptic and parabolic inverse boundary value problems. Two imaging applications that are addressed are electrical impedance tomography (EIT) and thermal tomography. The inverse problems that are considered are nonlinear, and the reconstructions are sought by least squares minimization. Algorithms for such minimization often rely on iterative evaluation of the target function and its partial derivatives. In this thesis, we approximate these target functions, which themselves are solutions to partial differential equations, with polynomial surrogates that are simple to evaluate and differentiate. In the context of EIT, this method is used to estimate the shape of the object in addition to its electrical properties. The method is also shown to be feasible for thermal tomography, including the case of uncertain object shape. We also present a novel logarithmic linearization method for EIT. Transforming the voltage measurements in a certain logarithmic way reduces the nonlinearity in the relationship between the electrical properties and the measurements, allowing a reconstruction with fewer minimization steps, or only one. Furthermore, we propose a modification to the complete electrode model for EIT. The new model is shown to be compatible with experimental measurement data, while the increased regularity of the predicted electromagnetic potential improves the convergence properties of numerical methods.
- Assessment and learning styles in engineering mathematics education
Perustieteiden korkeakoulu | Licentiate thesis (2012) Havola, Linda. Mathematics teaching at the Aalto University School of Science has been actively developed since the early 2000s. This study examined the factors affecting first-year engineering students' success in mathematics. The factors considered were the initial level of mathematical skills, computer-aided assessment, and learning styles. The STACK system for automatically assessed exercises has been in use on Aalto University's basic mathematics courses since 2006, and the study examined its effectiveness. Based on the results, the use of the system has increased flexibility on the courses, and it has been possible to increase the weight of the exercises in the course grade. The STACK system was also used to implement a basic skills test in mathematics, based on questions previously used at Tampere University of Technology. The test contains 16 questions on the areas of upper secondary school mathematics that are most important for engineering. Since 2008, first-year students in Aalto University's engineering programs have taken the basic skills test at the beginning of autumn. The results show that students have difficulties, for example, in understanding the concept of the logarithm. In the autumns of 2009 and 2010, new students could also answer a learning styles questionnaire based on Felder and Soloman's Index of Learning Styles Questionnaire. Based on the responses, the majority of Aalto University's engineering students are visual and sensing learners; on the sequential/global and active/reflective dimensions the students were distributed more evenly. The results of the study are intended to be used for developing engineering mathematics education, and some development measures, such as piloting activating teaching methods, have already been carried out.
- Suomen kansallisten päästövähennystoimien riskien ja kustannustehokkuuden arviointi
School of Science | Master's thesis (2011) Hast, Aira. According to the EU climate and energy package, Finland should reduce greenhouse gas (GHG) emissions in the national, non-trading sectors (non-ETS) to at least 16 % below 2005 levels by 2020. In order to meet this target, Finland has to implement GHG abatement activities. This Thesis studies a situation where mitigation costs should be as low as possible. To minimize the costs, the mitigation activities with the best cost-efficiency should be chosen when forming optimal abatement portfolios. However, the amount of GHG reductions and the costs are uncertain for every abatement activity, and therefore portfolios involve the risks of reducing emissions less than predicted or of causing higher costs than estimated beforehand. The objective of this Thesis is to build portfolios which fulfil the reduction target. Each portfolio consists of the activities chosen to be implemented in the examined timeline 2010-2020, together with the year in which each will be implemented. In this Thesis, abatement activities are chosen among 17 independent mitigation actions. To form an optimal portfolio for different levels of GHG reductions, a stochastic optimization model is built. The amounts of risk related to the costs and reductions in different portfolios are then compared. The results from this analysis show that the uncertainties in costs and reductions are almost equal in every examined efficient portfolio when the reductions are gained by national mitigation actions. It also seems that the risk of reducing emissions less than expected cannot be lowered even if the expected value of costs is raised. The probability of meeting the reduction target seems to depend strongly on the expected value of costs, so that a higher probability of meeting the target involves higher costs.
In addition, it can be seen that increasing the probability of meeting the target requires relatively higher costs at high probability levels than at low ones, because marginal abatement costs increase as a function of the gained reductions. The results show that some abatement actions are chosen in nearly all efficient portfolios, while other actions are chosen extremely seldom. To meet the reduction target, the Member States of the EU can trade their non-ETS allocations. Two cases are compared in this Thesis: in the first case the Member States have to meet the target by national mitigation actions, and in the second case the Member States can also trade non-ETS allocations in 2020. The possibility of trading allocations changes the set of implemented actions and postpones their optimal timings, because the optimization is done by minimizing the present value of overall costs. The cost-minimizing portfolios are also studied by examining how the costs, the reductions gained by national mitigation actions, and the uncertainties related to them differ between these two cases. The results show that overall costs are approximately 10 % lower when the Member States can trade allocations. On the other hand, when trading allocations is possible, the risk of exceeding the expected costs increases because the price of an allocation unit is very uncertain. Sensitivity analysis is performed for the cost-minimizing portfolios in the different cases so that overall costs and gained reductions are studied separately. Uncertainty in the gained reductions is caused by the same sources in both cases, yet the variables causing uncertainty in costs are somewhat different.
- Automatic Assessment in University-level Mathematics
Perustieteiden korkeakoulu | Master's thesis (2009) Ruokokoski, Jarno. In this thesis we study the automatic assessment of randomized, university-level mathematics exercises. We use one system meant for this purpose and implement exercise collections on different mathematical topics. The selected topics are calculus and graph theory, with some material on mathematical proving also presented. The topics were selected to fit the teaching of university-level mathematics in the department of automation and systems technology and the department of computer science and engineering. Calculus is assumed to be familiar, but graph theory is presented so that it can be understood without prior knowledge. We wrote about 80 exercises for this thesis, used the latest technical innovations in automatic assessment in writing them, and tested the results in two courses in which about 300 students participated during the academic year 2008-2009. We present the most interesting exercises, the results of the experiments, and the feedback received.
- Diskreetin maksimaalifunktion perusominaisuudet
School of Science | Master's thesis (2011) Ropponen, Jonatan. The Hardy-Littlewood maximal function is one of the most essential operators in real and harmonic analysis, but it is not always sufficiently regular. It maps Lp functions to Lp functions but does not preserve, for instance, the continuity or Hölder continuity of functions. It would be beneficial to define a maximal function that is of the same magnitude as the Hardy-Littlewood maximal function and has the desired regularity properties. For this purpose, in this master's thesis we define the discrete maximal function, which also preserves continuity and Hölder continuity. The discrete maximal function can be used in relatively general metric spaces. For the purposes of this work, the most important assumption on the examined metric space is that the measure used is doubling; then the measures of balls with the same center are proportional to each other. The doubling property enables the use of numerous tools of analysis on metric spaces, yet it is not too demanding an assumption, since there are plenty of suitable spaces. The discrete convolution is used in defining the discrete maximal function. Similarly, the Hardy-Littlewood maximal function can be defined in Euclidean spaces with the ordinary convolution, but the ordinary convolution is not defined in general metric spaces. Of the properties of the discrete maximal function, we examine in particular the Lp properties and Hölder continuity. The properties obtained are compared to those of the Hardy-Littlewood maximal function. We also show that the discrete maximal function is of the same magnitude as the Hardy-Littlewood maximal function. The methods used include in particular results derived from the doubling property and essential covering theorems, such as the Vitali and Whitney covering theorems.
In this work, we examine the discrete maximal function in both the global and the local case. We observe that many of the properties are relatively similar in both cases. Therefore, the discrete maximal function can be restricted to a subset of the examined space without losing important properties.
- Bayesian Games for Analysis of Asymmetric Warfare Tactics
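For reference, the Hardy-Littlewood maximal operator that the discrete maximal function entry above compares against, together with the doubling condition assumed on the measure, can be written in a metric measure space (X, d, mu) as:

```latex
Mf(x) = \sup_{r>0} \frac{1}{\mu(B(x,r))} \int_{B(x,r)} |f| \,\mathrm{d}\mu,
\qquad
\mu(B(x,2r)) \le C_\mu \, \mu(B(x,r)) \quad \text{for all } x \in X,\; r > 0.
```

The discrete maximal function replaces the supremum over balls by a sum of discrete convolutions against a partition of unity at dyadic scales, which is what restores continuity and Hölder continuity.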
School of Science | Master's thesis (2011) Eskola, Olli. Roadside bombs have caused more than half of the casualties of the U.S. troops in the Afghan war. The aim of this work is to examine the threats of modern asymmetric warfare and to develop operations research based methods for advanced threat preparation, with the main emphasis on roadside bombs. In this work, we investigate the conflict between two forces. We define an influence diagram that contains the events relevant after the initialization of the bomb. The social support of the local people is included in the model because it is a key factor in asymmetric warfare. Following Bayesian game theory, both forces have two types, and the types define the tactics of the forces. The types are given either as a joint common probability distribution or as subjective conditional probabilities. For the conditional probabilities, a theorem is formulated to define when they are mutually consistent. When the conditional probabilities are consistent, the common joint distribution can be calculated using the Bayes rule. When the subjective probabilities are mutually inconsistent, the Kullback-Leibler divergence is used to calculate the common joint probability distribution that is closest, in terms of this measure, to the given subjective probabilities. An algorithm solving the Bayesian Nash equilibrium of the normal form game is programmed. The equilibrium shows the best responses, i.e., the tactics adopted by players maximizing expected utility. By assumption, the utilities sum to a constant. The influence diagram combined with carefully elicited expert judgments provides a systematic approach for modeling the existing uncertainties related to the outcomes of the applied tactics. Several computational cases are examined. Based on these, the social support, for example, is more significant than the type probabilities of the forces.
- Bayesian networks, influence diagrams, and games in simulation metamodeling
Perustieteiden korkeakoulu | Doctoral dissertation (article-based) (2011) Poropudas, Jirka. The Dissertation explores novel perspectives related to time and conflict in the context of simulation metamodeling, which refers to auxiliary models utilized in simulation studies. The techniques innovated in the Dissertation offer new analysis capabilities that are beyond the scope of the existing metamodeling approaches. In the time perspective, dynamic Bayesian networks (DBNs) allow the probabilistic representation of the time evolution of discrete event simulation by describing the probability distribution of the simulation state as a function of time. They enable effective what-if analysis where the state of the simulation at a given time instant is fixed and the conditional probability distributions related to other time instants are updated, revealing the conditional time evolution. The utilization of influence diagrams (IDs) as simulation metamodels extends the use of DBNs into simulation based decision making and optimization. They are used in the comparison of decision alternatives by studying their consequences, represented by the conditional time evolution of the simulation. For additional analyses, random variables representing simulation inputs can be included in both the DBNs and the IDs. In the conflict perspective, the Dissertation introduces a game theoretic approach to simulation metamodeling. In this approach, existing metamodeling techniques are applied to the simulation analysis of game settings representing conflict situations where multiple decision makers pursue their own objectives. Game theoretic metamodels are constructed based on simulation data and used to study the interaction between the optimal decisions of the decision makers, determining their best responses to each other's decisions and the equilibrium solutions of the game.
Therefore, the game theoretic approach extends simulation based decision making and optimization into multilateral settings. In addition to the capabilities related to time and conflict, the techniques introduced in the Dissertation are applicable to most of the other goals of simulation metamodeling, such as the validation of simulation models. The utilization of the new techniques is illustrated with examples considering the simulation of air combat. However, they can also be applied to simulation studies conducted with any stochastic or discrete event simulation model.
- Bayesian Optimal Experimental Design in Imaging
School of Science | Doctoral dissertation (article-based) (2023) Puska, Juha-Pekka. An inverse problem is defined as a problem that violates one of the classical criteria of a well-posed problem: a solution exists, is unique, and depends continuously on the data in some reasonable topology. A problem that is not well posed is called ill-posed, and the development of tools to tackle ill-posed problems is the goal of the field of inverse problems research. In imaging, ill-posedness is often an inevitable consequence of the high dimension of the unknown compared with the measurement data. In an imaging problem, one aims to reconstruct the spatial two- or three-dimensional structure of an object of interest, leading to unknown parameters in the hundreds of thousands or beyond, while the dimension of the measurement data is determined by the number of sensors and thus limited by physical constraints to values often at least an order of magnitude lower. Another consequence of the high dimensionality of the problem is the computational cost of the associated computations. In imaging problems, there is also usually a cost involved in acquiring data, and thus one would naturally want to minimize the amount of data collection required. One tool for this is optimal experimental design, where one aims to perform the experiment in such a way as to maximize the value of the data obtained. The challenge, however, is that the search for this optimal design usually leads to a computationally demanding problem whose size depends on the dimensions of both the data and the unknown. Overcoming this difficulty is the main objective of this thesis. The problem can be tackled by using Gaussian approximations in the formulation of the imaging problem, which leads to practical solution formulas for the quantities of interest.
In this thesis, tools are developed to enable the efficient computation of expected utilities for certain measurement designs, particularly in sequential imaging problems and for non-Gaussian prior models. Additionally, these tools are applied to medical imaging and astronomy.
- Being better better: living with systems intelligence
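Under the Gaussian approximations mentioned in the Bayesian optimal experimental design entry above, a linearized model with Jacobian J, noise variance, and prior covariance gives a closed-form posterior covariance, and an A-optimal design can be searched for greedily by minimizing its trace. The greedy search and all matrices below are an illustrative sketch, not the thesis's algorithms:

```python
import numpy as np

def posterior_covariance(J, noise_var, prior_cov):
    """Posterior covariance of a Gaussian linear model with i.i.d. noise:
    (J^T J / noise_var + prior_cov^{-1})^{-1}."""
    return np.linalg.inv(J.T @ J / noise_var + np.linalg.inv(prior_cov))

def a_optimal_design(candidate_rows, k, noise_var, prior_cov):
    """Greedy A-optimal design: repeatedly add the candidate measurement
    row that most reduces the trace of the posterior covariance."""
    chosen = []
    remaining = list(range(len(candidate_rows)))
    for _ in range(k):
        def score(i):
            J = np.array([candidate_rows[j] for j in chosen + [i]])
            return np.trace(posterior_covariance(J, noise_var, prior_cov))
        best = min(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

With candidate rows measuring individual unknowns, the greedy search prefers a row for a not-yet-measured component over repeating one already chosen, since the trace reduction is larger.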
School of Science | D4 Published development or research report (2014) Hämäläinen, Raimo P.; Jones, Rachel; Saarinen, Esa
- Benchmarking-mittaristo sosiaali- ja terveydenhuollon tuotannonohjaukseen
Helsinki University of Technology | Master's thesis (2009) Niemelä, Pyry-Matti. Benchmarking is a development process for organizations based on measuring and comparing, and it is suitable for the social and healthcare sector. Although the social and healthcare sector is wide, it is possible to develop a generic benchmarking service for the demands of the sector. Such a benchmarking service requires a functional performance measurement system for the sector, and the purpose of this thesis is to develop this system. The thesis deals with production planning and control in the social and healthcare sector, efficiency analysis, and the theory and practice of the benchmarking method. Based on the theory, three frameworks for the performance measurement system are introduced. Based on the selected framework, the performance measurement system is drafted. The system is tested in a benchmarking project of special care for people with intellectual disabilities. The thesis introduces a cost-effectiveness chart, which is used as the framework of the performance measurement system, and presents a few means of analysis for applying the system. In the case of the special care, the performance measurement system is found to be functional at least for service structure benchmarking. The thesis demonstrates that it is possible to develop a generic performance measurement system for benchmarking in the social and healthcare sector. The system is suitable for a multitude of purposes within the sector, and a generic performance measurement system increases the efficiency and quality of benchmarking services.
- Best bilinear shell element: flat, twisted or curved?
Doctoral dissertation (article-based) (2009) Niemi, Antti H. This thesis concerns the accuracy of finite element models for shell structures. The focus is on low-order approximations of layer and vibration modes in shell deformations, with particular reference to problems with concentrated loads. It is shown that parametric error amplification, or numerical locking, arises in these cases when bilinear elements are used and the formulation is based on the so-called degenerated solid approach. Furthermore, an alternative way of designing bilinear shell elements is discussed. The procedure is based on a refined shallow shell model which allows for an effective coupling between the membrane and bending strains in the energy expression.