# 05. Harjoitustyöt ja kurssitutkielmat / Coursework and Term papers, Final projects


Coursework, term papers and final projects completed at the university

## Browse

### Browsing 05. Harjoitustyöt ja kurssitutkielmat / Coursework and Term papers, Final projects by Department "Department of Applied Physics"

Now showing 1 - 6 of 6


#### Capturing Higgs boson pairs in the CMS hardware trigger using muon-enriched jets

(Aalto University School of Science, 2022) Salmi, Onni; Department of Applied Physics; Helsinki Institute of Physics; School of Science

After the conclusive discovery of the Higgs boson at CERN's Large Hadron Collider (LHC) 10 years ago, the interest of the high-energy particle physics community has turned to probing the limits of the Standard Model in the search for new physics. Standard Model predictions, such as the Higgs boson self-interaction and the related Higgs boson pair production, are interesting in this regard. If the physical parameters related to Higgs boson pair production, derived from particle collision data, differ in a statistically significant way from the current theoretical predictions, then physics beyond the Standard Model is present. More Higgs boson pair data therefore needs to be obtained in order to draw reliable conclusions on the parameter values.

The Compact Muon Solenoid (CMS) experiment, located at the LHC, records collision data from the LHC proton-proton collisions. Due to the high collision rate, limited readout and finite data storage capacity, only a fraction of the collision data can be kept for analysis. At CMS these constraints are handled by a data filtering system called the trigger system, which uses various algorithms to discern interesting collision events from the background and capture them. Moreover, since Higgs boson pair production is about 1000 times less frequent than single Higgs production, precise and efficient trigger algorithms targeting these kinds of events are needed.

In this report, a new hardware trigger algorithm for capturing Higgs boson pairs at the CMS experiment is developed and analyzed. The algorithm makes use of muons, charged elementary particles, produced inside b-quark jets.
These are the so-called muon-enriched jets. Prior to the performance analysis of the algorithm, the relevant theoretical aspects of Higgs boson pair production are presented briefly, after which a cursory overview of the CMS detector is given, followed by algorithm design considerations. An improvement of 5.5% compared to the old algorithms, at an additional trigger rate of 1.4 kHz, is obtained.

#### Evaluating the viability of Serpent in Passive Gamma Emission Tomography (PGET) radiation transport simulations

(Aalto University School of Science, 2021) Kähkönen, Topias; Department of Applied Physics; School of Science

Passive Gamma Emission Tomography (PGET) has been developed for the verification of spent nuclear fuel. To reliably detect missing or substituted fuel pins, verification processes with advanced image reconstruction and classification algorithms are being developed. High-fidelity PGET simulations could provide valuable information for this development, and they require accurate modelling of spent nuclear fuel, gamma radiation, and the detector response. This thesis studies the viability of the Monte Carlo particle transport code Serpent for PGET modelling; the objective is to evaluate the viability of gamma-radiation transport in this application.

A two-phased analog photon transport was used to simulate flux sinograms. To fit within the available time frame, the transport was divided into two consecutive phases, and this scheme was benchmarked against a normal one-phased photon transport. The method was consistent with the reference calculation and a severalfold efficiency improvement was obtained. Results were visualized as flux sinograms, from which filtered back projection reconstructions were performed. The simulated reconstructed images were compared to experimental data to qualitatively estimate the performance of the simulation.
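The filtered back projection step mentioned above can be sketched in a few lines. This is an illustrative NumPy toy (a ramp filter in the Fourier domain plus nearest-neighbour back projection onto a square grid), not the reconstruction code used in the thesis:

```python
import numpy as np

def fbp(sinogram, thetas):
    """Reconstruct an image from a sinogram by filtered back projection.

    sinogram: (n_angles, n_det) array of line-integral projections.
    thetas:   projection angles in radians.
    """
    n_angles, n_det = sinogram.shape
    # Ramp filter applied to each projection in the Fourier domain.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    # Smear each filtered projection back across an n_det x n_det grid.
    mid = n_det // 2
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for theta, proj in zip(thetas, filtered):
        # Detector coordinate of every pixel for this view (nearest neighbour).
        t = np.round(X * np.cos(theta) + Y * np.sin(theta)).astype(int) + mid
        recon += proj[np.clip(t, 0, n_det - 1)]
    return recon * np.pi / n_angles
```

A point source at the grid centre, for example, projects to the central detector bin at every angle, and the reconstruction peaks back at the centre pixel.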
The results of the simulations were physically sensible, but the framework has to be developed further. For a fully capable simulation framework, the performance of the radiation transport has to be increased further to make it suitable for simulating large populations of flux sinograms. The detector response was not simulated in this study, and it has to be implemented to obtain realistic results. Furthermore, once the framework is ready, the simulation has to be validated against other codes or experimental data.

#### Implementing Multi-Task Learning for Bayesian Optimization Structure Search

(Aalto University, 2020) Sten, Nuutti Akilles; Todorovic, Milica (Research Fellow, Aalto University, Department of Applied Physics); Remes, Ulpu (Postdoctoral Researcher, University of Helsinki, Department of Mathematics and Statistics); Rinke, Patrick (Associate Professor, Aalto University, Department of Applied Physics); Department of Applied Physics; Computational Electronic Structure Theory (CEST); School of Science

Machine learning algorithms are highly dependent on the quality of their training data. Problems with the data, such as limited accuracy and the expense of gathering it, project directly onto the cost and performance of the algorithm. Multi-task learning methods try to overcome this problem by combining information from multiple sources of data. In this study I focused on two existing methods for multi-output Gaussian processes: the linear model of coregionalisation and the intrinsic coregionalisation model. My aim was to implement these algorithms in the Bayesian Optimization Structure Search (BOSS) tool and to test their potential for accelerating atomistic structure search space navigation with a simulation study.
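The intrinsic coregionalisation model mentioned above couples tasks through a task-covariance matrix B multiplying a single shared input kernel k, so that K((x, i), (x', j)) = B[i, j] · k(x, x'). A hand-rolled NumPy illustration under simple assumptions (an RBF input kernel and integer task labels), not the BOSS implementation:

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    """Squared-exponential input kernel, shared by all tasks."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def icm_kernel(X1, t1, X2, t2, B, lengthscale=1.0):
    """Intrinsic coregionalisation model: K((x,i),(x',j)) = B[i,j] * k(x,x').

    t1, t2 are integer task indices for the rows of X1, X2.
    B must be positive semi-definite, e.g. B = W @ W.T + np.diag(kappa).
    """
    return B[np.ix_(t1, t2)] * rbf(X1, X2, lengthscale)
```

Because the resulting Gram matrix is an elementwise (Schur) product of two positive semi-definite matrices, it is itself a valid covariance matrix for a multi-output Gaussian process.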
I found potential for reducing the computational cost of expensive structure-search optimizations with multi-task learning.

#### Trade-off between the level of accuracy and required computational resources for hybrid functional NAO DFT calculations

(Aalto University School of Science, 2018) Peltola, Aku; Department of Applied Physics; Electronic Properties of Materials; School of Science

Density functional theory (DFT) is a widely used theory for studying the electronic structure of matter [2]. This special assignment focuses on DFT calculations performed using numerically tabulated atom-centered orbitals (NAOs) as implemented in the Fritz Haber Institute "ab initio molecular simulations" (FHI-aims) computer program package. The aim of the work is to find a balance between accuracy and use of computational resources, a problem that arises especially in calculations of larger periodic systems performed with hybrid exchange–correlation functionals. This is done by producing new settings for the calculations, called intermediate settings. Simulations performed with the intermediate settings should sit between the pre-existing light and tight settings in both the level of convergence of the calculation and the computational resources used. Three different methods were used in producing the new intermediate settings for Na, K, Pt, Mo, Mn and Cr: first, decreasing the pool of basis functions; second, adjusting the confining potential; and third, using auxiliary basis functions.
The performance of the new intermediate settings is verified in five different chemical environments, and the accuracy should translate to other chemical environments as well.

#### Variance reduction for collimated gamma detector geometry in Serpent

(Aalto University School of Science, 2022) Kähkönen, Topias; Department of Applied Physics; School of Science

The potential of modeling photon transport in passive gamma emission tomography (PGET) with Serpent is restricted by the computational demand of simulations that use conventional particle tracking routines. However, the analog tracking process can be altered to improve the computational efficiency of the Monte Carlo simulation. In this thesis, a variance reduction scheme utilizing splitting and modified direction sampling is developed and implemented in Serpent. The implementation is verified in a simple test geometry, and the method is demonstrated in a PGET gamma-radiation transport simulation. As a result, an improvement by a factor of 13 or greater over the analog simulation was obtained. However, further development would be required to provide a user interface for the input parameter adjustments needed to generalize the method.

#### Verification and Sensitivity Analysis of Maximum Likelihood Estimation for Loviisa NPP Seismic Hazard

(Aalto University School of Science, 2023) Heikkilä, Laura; Koskenranta, Jukka; Leppänen, Timo; Department of Applied Physics; School of Science; Ala-Heikkilä, Jarmo

Probabilistic seismic hazard analysis (PSHA) is used for estimating the risk that earthquakes pose to nuclear power plants. To achieve the required level of safety, the recurrence of earthquakes of different magnitudes has to be known. This is done by estimating the parameters a and b in the Gutenberg-Richter equation.
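For reference, the Gutenberg-Richter relation expresses the number $N$ of earthquakes with magnitude at least $M$ as

```latex
\log_{10} N(M) = a - b\,M
```

where $a$ measures the overall seismic activity of the region and $b$ the relative frequency of small versus large events.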
The purpose of this work is to verify the use of the maximum likelihood estimation (MLE) method for estimating earthquake recurrence in Finland and to perform a sensitivity analysis of the method. The verification and the sensitivity analysis are performed by comparing the estimated values of the parameters a and b; the least squares (LS) method is included for comparison. The verification gives results similar to those of previous studies for both methods. For the sensitivity analysis, changes in the completeness times, the minimum and maximum magnitude, and the width of the magnitude bins are tested. Both methods appear most sensitive to changes in the minimum magnitude, but the lack of high-magnitude events and the incompleteness of the data for the smallest earthquakes increase the uncertainty of the estimation. At the end of this paper, the parameters giving the most reliable results are suggested.
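A standard maximum likelihood estimator for the b-value above magnitude completeness is the one often attributed to Aki, with Utsu's correction for binned magnitudes. A minimal sketch (the magnitude data and thresholds in the usage example are made-up illustrations, not the Loviisa catalogue):

```python
import numpy as np

def b_value_mle(mags, m_min, bin_width=0.1):
    """Aki/Utsu maximum likelihood estimate of the Gutenberg-Richter b-value.

    mags:      observed magnitudes (only events with M >= m_min are used)
    m_min:     magnitude of completeness
    bin_width: magnitude binning; half a bin corrects for discretisation (Utsu).
               Use 0 for continuous (unbinned) magnitudes.
    """
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_min]
    # b = log10(e) / (mean(M) - (m_min - bin_width/2))
    return np.log10(np.e) / (m.mean() - (m_min - bin_width / 2.0))
```

For magnitudes drawn from the Gutenberg-Richter law, the excess M - m_min is exponentially distributed with rate b ln 10, so the estimator recovers the true b-value on synthetic data.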