[dipl] Perustieteiden korkeakoulu / SCI


Recent Submissions

Now showing 1 - 20 of 4926
  • Item
    Dependence of Neuronal Entrainment on Individual Brain Dynamics – A Joint Experimental and Computational Modelling Study
    (2024-03-11) Repo, Joona; Palva, Satu; Perustieteiden korkeakoulu; Palva, Matias
    20 to 60% of people suffering from mental disorders do not respond to conventional treatments. Repetitive transcranial magnetic stimulation (rTMS) has emerged as a non-invasive, FDA-approved alternative for some treatment-resistant mental disorders. However, the efficacy of rTMS varies, with response rates estimated between 40 and 60%. Personalized rTMS protocols have been proposed in response to these low response rates, but current research has not yet achieved effective personalization. Neuronal entrainment, the ability of rTMS to elicit cortical oscillations, has been linked to the neuroplastic effects of rTMS required for treatment response. Therefore, linking individual brain dynamics with entrainment is important. Brain criticality describes the dynamics of a brain operating near the critical point, the phase transition point of a complex system between order and disorder. Detrended fluctuation analysis (DFA) can be used to quantify long-range temporal correlations, a hallmark of criticality. This thesis explores the relationship between criticality and entrainment by comparing entrainment responses, measured with the phase-locking factor (PLF), with DFA exponents for 19 subjects who were administered 10 Hz rTMS with concurrent electroencephalography. The dependence is further explored with a virtual rTMS experiment using a hierarchical Kuramoto model. The findings suggest that entrainment decreases and entrainment spreading increases closer to the critical point. The modeling supports these findings, together emphasizing the possibility of utilizing individual brain dynamics in rTMS personalization.
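    For readers unfamiliar with the phase-locking factor, the sketch below shows how a PLF time course could be computed from EEG epochs aligned to the rTMS pulse train. It is a generic illustration with assumed array shapes and filter settings, not the thesis's analysis code.
```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_factor(trials, fs, freq, bandwidth=2.0):
    """PLF at `freq`: magnitude of the mean unit phase vector across trials.
    `trials` is an (n_trials, n_samples) array of EEG epochs; values near 1
    indicate strong entrainment to the stimulation frequency."""
    # Band-pass around the stimulation frequency (e.g. 10 Hz rTMS).
    nyq = fs / 2.0
    b, a = butter(4, [(freq - bandwidth) / nyq, (freq + bandwidth) / nyq], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)
    # Instantaneous phase from the analytic signal.
    phase = np.angle(hilbert(filtered, axis=1))
    # Average unit phase vectors over trials; take the magnitude per sample.
    return np.abs(np.exp(1j * phase).mean(axis=0))

# Example: 19 simulated trials of 1 s at 1 kHz, probing 10 Hz entrainment.
rng = np.random.default_rng(0)
trials = rng.standard_normal((19, 1000))
plf = phase_locking_factor(trials, fs=1000, freq=10.0)
```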
  • Item
    Utilization of local large language models for business applications
    (2024-03-11) Haaralahti, Elias; Shroff, Mickey; Perustieteiden korkeakoulu; Laaksonen, Jorma
    Large Language Models (LLMs) have gained popularity in various use cases due to their capabilities. Currently, third-party services are commonly used, but these solutions have drawbacks, such as data privacy concerns. As a result, interest in local solutions has increased, prompting international companies to release their own models, which are comparable to closed solutions. This thesis explores how local large language models can be utilized for business applications. The goal is to form a comprehensive view of the state of LLMs, including their capabilities and limitations, by researching them from various sources. Additionally, experiments are conducted to analyze the inference requirements, assess the impact of quantization on them, evaluate the language capabilities of the models, and determine their capability to follow instructions and generate coherent output. The experiments include applying Retrieval-Augmented Generation (RAG) using internal company data and fine-tuning a model to improve language capabilities with limited computational resources. As part of the research, a customized method was created and used to evaluate the effectiveness of retrieval-augmented generation. This is done by automatically creating a question-answer dataset with over a thousand entries, which an LLM can use to evaluate the factuality and relevance of the context or the model output. The result of the thesis is a comprehensive study of current LLMs, tools, and methods, which can be applied as a foundation to build new products in the future. The results indicate that LLMs are suitable for many use cases, although they do have limitations.
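    The core RAG pattern the abstract refers to is retrieve-then-prompt. Below is a minimal, self-contained sketch in which a toy bag-of-words retriever stands in for a real local embedding model; the function names and example documents are illustrative assumptions, not the thesis's pipeline.
```python
import numpy as np
from collections import Counter

def embed(text, vocab):
    """Toy bag-of-words vector; a real pipeline would use a local embedding model."""
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query (cosine similarity)."""
    vocab = sorted({w for d in docs + [query] for w in d.lower().split()})
    doc_vecs = np.stack([embed(d, vocab) for d in docs])
    q_vec = embed(query, vocab)
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9)
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

docs = ["Invoices are archived for seven years.",
        "Support tickets are triaged within one business day.",
        "Expense reports require manager approval."]
query = "How long are invoices archived?"
context = "\n".join(retrieve(query, docs))
# The retrieved context is prepended to the prompt sent to the local LLM:
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```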
  • Item
    Pair trading opportunities as a form of maximal extractable value on the Ethereum Network
    (2024-03-13) Eloranta, Antero; Kaila, Ruth; Perustieteiden korkeakoulu; Kivelä, Mikko
    Maximal Extractable Value (MEV) refers to the maximum value block producers can extract from an Ethereum block by reordering, inserting, or censoring transactions. Pair trading is an investment strategy that involves identifying two closely related assets and taking simultaneous long and short positions in them. This strategy aims to generate returns by exploiting price disparities between the two assets, regardless of the broader market's direction. The purpose of this study is to examine whether pair trading opportunities can be used as a Maximal Extractable Value extraction method on the Ethereum network. The study expands the existing literature on Maximal Extractable Value by analyzing a new kind of extraction method. It analyzes decentralized exchange transactions between September 2022 and September 2023 to determine the applicability of pair trading logic as an extraction method. Pair trading opportunities are identified from the transactions following the methodology of the pair trading literature, and the pair trading strategy is found to be profitable over a long period, while individual opportunities tend to make a loss. The strategy also remains profitable during the collapse of FTX, the bankruptcy of Silicon Valley Bank, and the exploitation of Curve, which were periods of increased market distress. The majority of the opportunities are relatively time sensitive, with the median entry window for positions lasting slightly over 10 minutes and the median holding time being 16 minutes before the position reverts or diverges. The study does not find unambiguous evidence of the characteristics of pair trading opportunities changing under increased market distress. Similarly, it does not find that competition among MEV searchers for the pair trading opportunities shows an increasing or decreasing trend during the observation period.
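    As a rough illustration of the pair trading logic described above, the sketch below generates entry and exit signals from a rolling z-score of the log-price spread. The window and thresholds are conventional illustrative values, not the calibrated parameters used in the thesis.
```python
import numpy as np

def pair_trade_signals(p1, p2, window=60, entry=2.0, exit=0.5):
    """Classic pair trading logic: trade the spread between two related
    assets when it deviates from its rolling mean, close on reversion.
    Returns +1 (long spread), -1 (short spread), or 0 (flat) per step."""
    spread = np.log(np.asarray(p1)) - np.log(np.asarray(p2))
    signals = np.zeros_like(spread)
    for t in range(window, len(spread)):
        hist = spread[t - window:t]
        z = (spread[t] - hist.mean()) / (hist.std() + 1e-12)
        if z > entry:
            signals[t] = -1            # spread too wide: short p1, long p2
        elif z < -entry:
            signals[t] = +1            # spread too narrow: long p1, short p2
        elif abs(z) < exit:
            signals[t] = 0             # spread reverted: close the position
        else:
            signals[t] = signals[t - 1]  # otherwise hold the current position
    return signals
```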
  • Item
    Natural Language Inference for Hierarchical Zero-Shot Text Classification
    (2024-03-11) Pirhonen, Anton; Kosenkov, Ilia; Perustieteiden korkeakoulu; Marttinen, Pekka
    Text entailment classification has recently emerged as a method of performing zero-shot text classification. The method allows classifying textual features into user-defined labels without requiring additional task-specific fine-tuning data. This study investigates the utilization of text entailment classifiers for hierarchical zero-shot text classification. It fine-tunes text entailment classifiers to perform both supervised and zero-shot hierarchical text classification on a hierarchical Amazon product data set. The experimental results indicate that hierarchical text classification can be transformed into a text entailment task, that text entailment classifiers can be fine-tuned for hierarchical zero-shot classification on semantically related training data, and that the text entailment model's output logits can be utilized to filter the classifier predictions, improving classification quality. These findings suggest that text entailment classification is an effective method for performing hierarchical zero-shot text classification and demonstrate how the text entailment task can be adapted to hierarchical classification.
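    A hedged sketch of the hierarchical zero-shot idea: each candidate label becomes an entailment hypothesis, classification descends the label hierarchy greedily, and low entailment scores are filtered out. The `entail_prob` stand-in and the hypothesis template are assumptions; a real system would call a fine-tuned NLI model.
```python
def entail_prob(premise, hypothesis):
    """Toy stand-in for an NLI model: word-overlap score in [0, 1].
    A real implementation would run a fine-tuned entailment classifier."""
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return len(p & h) / max(len(h), 1)

def classify(text, hierarchy, threshold=0.5, path=()):
    """Greedy top-down zero-shot classification: at each level, pick the
    label whose hypothesis is most entailed by the text; stop when the
    best score falls below `threshold` (the logit-filtering idea above)."""
    if not hierarchy:
        return path
    scores = {label: entail_prob(text, f"This product is about {label}.")
              for label in hierarchy}
    best = max(scores, key=scores.get)
    if scores[best] < threshold:
        return path  # filtered out: no confident prediction at this level
    return classify(text, hierarchy[best], threshold, path + (best,))

# Example hierarchy: {"electronics": {"headphones": {}, "cameras": {}}, "books": {}}
```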
  • Item
    Line planning with multiple modalities: Performance analysis of cost-oriented approaches
    (2024-03-12) Jalovaara, Jarkko; Schiewe, Philine; Perustieteiden korkeakoulu; Schiewe, Philine
    A functional public transport system is an important part of any contemporary city, providing affordable transport services for citizens, improving environmental sustainability, and increasing the overall welfare of the city. A key element of every public transport system is its set of lines, i.e., routes for the vehicles and passengers, since the lines provide the basis for operating the entire transport system and allow passengers to plan their journeys efficiently. Mathematical models for designing routes and lines optimally give strong support for decision-making in transport planning. However, the large size of real-life transport systems and the varying properties of different modes of transport require more efficient models that can be used on real-life instances. In this work, we introduce line planning models and algorithms based on integer optimization for obtaining cost-optimal sets of lines in public transport networks with multiple modes of transport. We formulate the models in terms of external passenger demand and present a theoretical analysis based on the properties of multimodal public transport networks. Finally, we perform computational experiments and compare the models using an artificial public transport network and passenger demand. The results show that the integer optimization methods give optimal solutions in terms of line operating costs while also maintaining reasonable passenger travel times. We also confirm that line planning problems with multiple modes can be reduced to a set of single-modality problems at the expense of optimality. Overall, we observe a trade-off between tractability and optimality for the different models. Future work remains on analyzing the behaviour of the models in terms of model parameters and different objectives in public transport line planning.
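    For orientation, a standard cost-oriented line planning formulation from the literature looks like the integer program below (with one line pool per mode in the multimodal case). The notation is generic and not necessarily the exact model used in the thesis.
```latex
\begin{align*}
\min \quad & \sum_{\ell \in \mathcal{L}} c_\ell \, f_\ell \\
\text{s.t.} \quad
  & f^{\min}_e \;\le\; \sum_{\ell \in \mathcal{L} :\, e \in \ell} f_\ell \;\le\; f^{\max}_e
      && \forall e \in E, \\
  & f_\ell \in \mathbb{Z}_{\ge 0} && \forall \ell \in \mathcal{L},
\end{align*}
```
    Here $\mathcal{L}$ is the pool of candidate lines, $f_\ell$ the frequency and $c_\ell$ the operating cost of line $\ell$, and the bounds $f^{\min}_e, f^{\max}_e$ on each edge $e$ of the network are derived from passenger demand and vehicle capacity.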
  • Item
    The extent of underdiagnosis and its reduction by managing general practitioners' and occupational health physicians' work with digital means in a private healthcare company
    (2024-03-13) Tuuri, Julia; Nyberg-Oksanen, Eeva; Ämmälä, Antti-Jussi; Perustieteiden korkeakoulu; Vuori, Timo
    High-quality patient care is both appropriate and timely. This thesis examines the extent of underdiagnosis, i.e., how often the diagnostics, treatment, or disease monitoring chosen by a physician is narrower than the recommendations. The study was carried out for three example diseases: type 2 diabetes, hypertension, and hypercholesterolemia. The analyses revealed significant underdiagnosis both in the completion of follow-up laboratory tests and in the content and documentation of appointments. To reduce underdiagnosis and thereby improve the quality of care, Terveystalo introduced diagnosis prompts: pop-up windows that appear for specific diagnoses and offer diagnosis-specific treatment recommendations and a quick way to order tests. This thesis qualitatively evaluated, through interviews and surveys, the potential of the diagnosis prompts to improve adherence to treatment recommendations and to ease physicians' work. The change management process related to the diagnosis prompts was also assessed. The parties involved in creating the diagnosis prompts considered them to have potential for improving adherence to treatment recommendations. Physicians, by contrast, were more critical and generally felt that the prompts do not yet meet their goals, particularly for reasons related to the technical implementation. Compared with the previous literature, the change management process of the diagnosis prompts was more reactive. On the other hand, the iterations of the process allowed the change to adapt better to the organizational culture. Based on the results, I created a new change management model that combines the flexibility seen in Terveystalo's process with the deliberate planning offered by the models in the literature.
  • Item
    Making data-driven decisions – leveraging data-analysis and Business Intelligence in logistics management
    (2024-03-13) Olkkonen, Miika; Salo, Olli-Petteri; Perustieteiden korkeakoulu; Saarinen, Lauri
    Effective logistics management has a significant impact on a make-to-order company's business performance and customers' perception. Delivering products on time can be a challenge for companies providing low-volume, high-variety products. One method for understanding delivery performance is data analytics. Even though companies are gathering vast amounts of data, decision-makers in logistics management face the challenge of how to turn the data into valuable insights. This thesis was done as a case study for a Finnish manufacturing company. It aims to develop a data-driven decision-making tool for the company's logistics management to improve on-time delivery performance. Power BI was selected as the platform for the analytical solution due to its versatility and simplicity. The requirements for the tool were collected through iterative development of the solution. The solution provides decision-makers with information that is not available in high-level reports; its value is to provide insights that would otherwise be difficult to notice. This thesis suggests that the decision-makers using the solution should be considered when designing an analytical tool. Users may have varying analytical capabilities, so a simple interface is required. The decision-makers also need to critically evaluate the data they are analysing due to possible deviations. Future studies are recommended to investigate more advanced levels of analytics in logistics performance analysis.
  • Item
    Hyperspectral Image Super-resolution for Remote Sensing using High-Resolution Multispectral Image
    (2024-03-11) Chudasama, Yuvrajsinh; Muhammad, Usman; Perustieteiden korkeakoulu; Laaksonen, Jorma
    Hyperspectral imaging (HSI) has emerged as a pivotal technology in remote sensing, offering unparalleled spectral resolution that enables detailed analysis of the Earth's surface. However, the inherent trade-off between spatial and spectral resolutions in HSIs often limits their practical utility. This thesis addresses the critical challenge of enhancing the spatial resolution of HSIs using state-of-the-art deep learning approaches, with a particular focus on remote sensing applications. The investigation primarily explores the efficacy of Convolutional Neural Networks (CNNs) and Transformers in performing super-resolution on HSIs. Through extensive experiments conducted on a remote sensing dataset, the research assesses these models based on their ability to reconcile the high spectral fidelity of HSIs with enhanced spatial details. Subsequently, the selected methods are compared in terms of six different performance measures to quantitatively assess the spatial and spectral fidelity of super-resolved images. Additionally, visual analysis techniques such as mean absolute error maps and spectral angle maps are utilized to offer qualitative insights into the models' performance. The findings reveal that Transformer-based models, owing to their proficiency in capturing long-range dependencies, significantly outperform CNNs. This superiority is consistent across different scales of downsampling, underscoring the robustness of Transformer models to resolution degradation. The results of this study provide valuable insights into the current state-of-the-art in hyperspectral image super-resolution, offering guidance for future research in this domain.
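    The spectral angle map mentioned above has a compact definition: the per-pixel angle between reference and estimated spectra. A minimal sketch, assuming (H, W, bands)-shaped arrays:
```python
import numpy as np

def spectral_angle_map(ref, est, eps=1e-12):
    """Per-pixel spectral angle (radians) between a reference HSI and a
    super-resolved estimate, both shaped (H, W, bands). Smaller angles
    mean better spectral fidelity."""
    dot = (ref * est).sum(axis=-1)
    norms = np.linalg.norm(ref, axis=-1) * np.linalg.norm(est, axis=-1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return np.arccos(cos)

# The mean angle over all pixels is a common scalar summary (SAM score):
# sam_score = spectral_angle_map(ref, est).mean()
```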
  • Item
    Creating value with data architecture in a public sector organization – Applying the Data as a Product approach in data modeling
    (2024-02-19) Kujanpää, Sara; Karjalainen, Tuuli; Perustieteiden korkeakoulu; Turpeinen, Marko
    Legacy systems have been found to restrict organizations and cause technical limitations to business processes. The same issues can recur if these business processes are directly used as requirements for a new system. Replacing legacy systems should, therefore, combine the requirements of business processes and key organizational objectives with technical requirements. In this constructive case study, a logical customer data model was built for a customer relationship management (CRM) system component that will be a part of the target organization’s legacy system renewal project. The Data as a Product (DaaP) approach was chosen as a framework to investigate the target organization’s needs for its customer data. The literature review covers data in an enterprise context, data-driven organizations, data products, and data modeling. Users of existing systems were interviewed to gather requirements for the data model. Expert interviews were also conducted to identify the needs for customer data from a product perspective. Based on the requirements collected, a logical customer data model was built. Combining the DaaP approach with data modeling helped to identify needs and objectives for customer data from both technical and organizational perspectives. The DaaP approach addressed dimensions regarding customer data governance and management as issues related to data specification, quality, ownership, and protection were highlighted. The study extends the understanding of applying the DaaP approach in an operational context and elaborates on how the approach supports data modeling in the context of a technology renewal. The DaaP approach helps implement the renewal in a socio-technical fashion. Moreover, integrating data architecture with the business and application dimensions of enterprise architecture reinforces the value created with data architecture.
  • Item
    Data Lineage in the financial sector
    (2024-03-11) Peuralinna, Jenna; Nuutila, Arto; Perustieteiden korkeakoulu; Lassenius, Casper
    Data lineage, detailing the movement and transformation of data, is increasingly important now that companies possess growing volumes of data. While regulatory requirements serve as a primary driver for data lineage in the financial sector, its relevance extends beyond compliance, offering numerous benefits to organizations. Despite its growing importance, there remains a noticeable lack of expertise and maturity in data lineage practices in the financial sector. Hence, this thesis was conducted at a Finnish financial organization. The goal of this thesis is to demystify data lineage, highlight its benefits, and provide action proposals for the case organization. This thesis includes a literature review, a market analysis of data lineage tools, and a qualitative analysis of semi-structured interviews. Ten professionals in the case organization were interviewed, and the interviews were analysed with grounded theory using the Gioia methodology. The interviews reveal significant challenges in the current state of data lineage within the case organization. These challenges encompass operational and technical aspects, with operational challenges requiring immediate attention. The first step involves establishing standards, definitions, and guidelines, but overall, the actions are concurrent. Additionally, executive support, adequate training, and effective communication and collaboration are crucial for the successful integration of data lineage practices throughout the organization. During the market review of the tools, it became evident that without proper testing or a proof of concept, it is impossible to determine the suitability of the tools for the case organization. In the end, the issue of data lineage surpasses the mere selection and implementation of tools; it is fundamentally linked to managing organizational change. The thesis concludes with an emphasis on the transformative potential of effective data lineage management. This work contributes valuable insights for organizations navigating the complexities of data lineage, ensuring the integrity, quality, and compliance of their data assets in an ever-evolving digital landscape.
  • Item
    Harnessing customer segmentation for business performance
    (2024-03-13) Pauna, Aleksi; Ottela, Lassi; Perustieteiden korkeakoulu; Seppälä, Timo
    Customer segmentation refers to the process of dividing a population into homogeneous groups based on shared characteristics. It is closely related to customer relationship management (CRM) systems, which focus on four objectives: customer identification, acquisition, cultivation, and retention. This thesis focused on applying principles from the customer segmentation literature in the context of a gamified e-commerce company operating in North America, seeking to gain an updated understanding of its customer base, particularly its most valuable customers. Since the company does not collect demographic data from customers upon registration, this thesis relied on a customer survey (N=3367) to gather such data. Information was also collected pertaining to traditional segmentation themes that emerged from the literature. Comparable information was obtained from Google Analytics to evaluate response bias. The key findings included the observation that the Pareto principle appears to sustain its applicability in this context, i.e., a small portion of the customer base contributes a large share of the gross profit. Differences were discovered between loyal customers and other paying customers in terms of demographic characteristics, mainly related to age, gender, education, employment status, and household income. Comparisons were also made with US Census data. Noteworthy differences were found between behavioural and psychographic dimensions, while geographic characteristics seemed to remain the same across customer segments, mirroring the census data closely. Based on the reviewed literature, there appears to be a scarcity of studies interlinking profitability considerations, traditional segmentation characteristics, and the primary objectives of CRM systems, particularly customer identification. This thesis fills this gap by providing a comprehensive case study that merges these principles and offers actionable insights for management.
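    The Pareto-style claim above is easy to make concrete: sort customers by gross profit and measure the share contributed by the top fraction. A minimal sketch (the 20% fraction is the conventional illustration, not the thesis's exact cut-off):
```python
import numpy as np

def top_share(profits, fraction=0.2):
    """Share of total gross profit contributed by the top `fraction` of
    customers, a direct check of the Pareto principle."""
    p = np.sort(np.asarray(profits, dtype=float))[::-1]
    k = max(1, int(len(p) * fraction))
    return p[:k].sum() / p.sum()

# e.g. top_share(customer_profits) near 0.8 would mirror the classic 80/20 rule.
```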
  • Item
    Personal Space, Crowding and Safety in Populated Social Contexts
    (2024-03-11) Haavisto, Otso; Welsch, Robin; Perustieteiden korkeakoulu; Welsch, Robin
    The environmental factors of personal space (PS) regulation, such as room size, ceiling height and crowdedness, have been studied in the past, but with inconclusive results. Notably, how these factors affect PS regulation in conjunction with each other is yet to be studied. Understanding how the combination of social and environmental factors affects PS regulation and the subjective experience can help design more comfortable spaces and experiences in both physical and virtual contexts. To investigate the socio-spatial factors of PS regulation and room perception, 33 participants were recruited for a study in a virtual reality (VR) setting and tasked to evaluate rectangular rooms of various dimensions and crowd configurations. Participants performed an active-approach stop-distance task, indicating a comfortable conversation distance to a target virtual human in each room. Additionally, participants' subjective impressions of each room configuration were gathered. The work identifies a tendency for people to accommodate others in highly populated settings despite feeling crowded. The results indicate that the equilibrium theory – the prevailing theory used to explain PS regulation – is not valid in social contexts with crowds of people. Rather than increasing their comfort distance from the approach target as distances between crowd members got shorter and their sense of crowding increased, participants partly mimicked the spatial behavior of other people. This suggests that social expectations affect distancing behavior in populated contexts. The results have implications for both physical and virtual contexts. In the physical world, settings with large numbers of people should actively manage occupancy rates to ensure the safety of attendees. While crowded scenes in mixed reality do not risk the physical safety of users, they can stimulate feelings of discomfort, both due to PS violations and spatially incompatible interaction modalities. Therefore, social spatial computing applications should consider human spatial behavior tendencies in order to deliver a comfortable user experience.
  • Item
    Human-AI Integration in Long-Established Organizations
    (2024-03-13) Pöyhönen, Marko; Marttila, Heikki; Perustieteiden korkeakoulu; Ylitalo, Jari
    Recent advances in the field of generative AI have demonstrated the machines’ capability to produce outputs that resemble human intelligence. Therefore, human-AI integration is expected to increase both employee productivity and creativity. However, businesses must consider a multitude of technological, ethical, psychological, and organizational aspects to fully leverage the potential of AI. This thesis identifies the critical factors for successful human-AI integration in long-established organizations. Moreover, this research illustrates the need for organizations to strategize for changes in employee roles and to match the emerging skill demand. This research adopts a case study approach in a material manufacturing company that is part of a large multinational corporation. Furthermore, the research is based on abductive reasoning, constantly comparing the existing literature, qualitative data obtained from interviews, observations collected during an autoethnographic study process, and data from secondary qualitative analyses of previous interviews. These choices in research methodology complement each other and provide a feasible combination for studying a rapidly developing phenomenon. This study reviews and synthesizes the literature on AI, related organizing topics, and its role in digital transformations. The main critical factors for successfully integrating AI systems and employees suggested in the literature were data, IT infrastructure, talent, leadership, coordination, risk-taking, and change management. The thesis also explores the implications of human-AI integration for employee roles and skill requirements and suggests that AI can augment human intelligence and creativity but also poses challenges for ethicality, fairness, trust, and responsibility. Finally, this thesis develops a novel framework for human-AI integration based on the findings from the empirical data, providing practical guidance for managers to plan their AI development operations and enable sustainable development of the workforce. Namely, the analysis reveals that strong digital leadership, enhancement of AI literacy, and dialogue about the future of work enable successful human-AI integration in strong connection with self-efficacy.
  • Item
    Designing a mobile electronic health record application to support primary healthcare physicians’ work
    (2024-02-19) Honkanen, Pauliina; Jaakola, Markus; Perustieteiden korkeakoulu; Kujala, Sari
    New electronic services are continuously developed in the healthcare sector, for both patients and professionals. This thesis studies primary healthcare physicians' experiences of mobile electronic health records (EHRs) and their needs regarding them. The goal was to find out how mobile EHRs could support clinical work. An EHR is a collection of a patient's health information. Here, a mobile EHR refers to a system designed for managing patient data on a mobile device. Mobile EHRs have been found to increase the efficiency of physicians' work, especially in a hospital setting, where physicians are often required to switch locations, for example, during ward rounds. There are also some situations in the primary healthcare setting where a mobile EHR could streamline physicians' work. This thesis used a design science research methodology to produce a design artefact (a prototype complementing ERA Mobile, a web application by Atostek Oy) to address identified user needs. User needs were studied and usability feedback was collected by interviewing physicians. This study discovered central challenges in mobile EHR use, such as cumbersome text entry and the small screen that restricts access to data, especially when viewing a complex or extensive patient record. Usability improvements were proposed, e.g., to ease text entry on a mobile EHR. Checking laboratory test results, prescribing, and accessing and entering text entries in the patient record, among others, were seen as especially useful features for primary healthcare physicians. The participants evaluated that the prototype improved the usability of ERA Mobile. In primary healthcare, the proposed solution would be especially suitable for physicians conducting house visits and physicians with flexible working hours in, e.g., the private sector. The findings can be used for developing ERA Mobile and other mobile EHR systems, especially for Finnish healthcare. In future research, it is important to validate the findings by testing the usability of these solutions in a realistic context, and to evaluate the effects of mobile EHRs on the quality of patient care and physicians' cognitive work ergonomics.
  • Item
    Predicting Tumor Genome Evolution in Colorectal Cancer with Neural Networks
    (2024-03-11) Järvinen, Timo; Kankainen, Matti; Perustieteiden korkeakoulu; Lähdesmäki, Harri
    Human genetic information, encoded in DNA, dictates crucial processes such as the synthesis of RNA and proteins. Mutations, influenced by factors like neighboring bases and chromatin structure, can arise within DNA, thereby contributing to diseases such as cancer and impacting therapeutic approaches like immunotherapy. Consequently, identifying and predicting genetic mutations is of great importance. Deep learning, a powerful subset of machine learning, can reveal complex data patterns, making it particularly suited for deciphering the relationships between genetic sequences and the likelihood of mutational events emerging. This thesis explores the application of deep learning techniques to predict mutations genome-wide, using DNA sequences and the mutation burden. Data were retrieved from a colorectal cancer patient, featuring mutations at known stages of disease progression. First, I trained two base models: a multilayer perceptron using individual DNA bases with mutational data, and a convolutional neural network using 33-base-long DNA sequences without any other biological data. To improve performance, I combined the full 33-base-long DNA sequences with the accompanying mutation burden and trained convolutional, recurrent, and transformer neural networks. Then, new models were trained with data from different tumor evolution stages to test generalization for personalized modeling. All models were tested using data obtained from later stages of tumor evolution than the data used for training and validation. Results showed that the transformer architecture achieved the best accuracies on the main dataset, while other models performed better at other timepoints. The results suggested that deep learning models could effectively capture tumor evolution, with both DNA sequences and the simplified mutation burden contributing to predictions. However, accuracies remained modest, and future improvements could involve gathering more test and training data, combining data from all the patients, or complementing the mutation burden with richer information, such as detailed epigenetic data. Starting with the fundamental concepts of protein synthesis, this thesis shows the significance of understanding the role of genetic mutations in cells and disease development. It introduces deep learning basics, various neural networks, and performance evaluation methods. Relevant previous research is discussed, leading to the introduction of the new models and their performance, culminating in a discussion of the thesis research and its future directions.
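    A minimal sketch of the combined sequence-plus-burden model described above, written in PyTorch with illustrative layer sizes; the thesis's tuned architectures, and its recurrent and transformer variants, are not reproduced here.
```python
import torch
import torch.nn as nn

class MutationCNN(nn.Module):
    """1D CNN over a one-hot encoded 33-base window, concatenated with a
    scalar mutation burden feature before the classification head."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # pool over the 33 positions
        )
        self.head = nn.Sequential(nn.Linear(32 + 1, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, seq_onehot, burden):
        # seq_onehot: (batch, 4, 33) one-hot bases; burden: (batch, 1) scalar
        feats = self.conv(seq_onehot).squeeze(-1)             # (batch, 32)
        return self.head(torch.cat([feats, burden], dim=1))   # mutation logit

model = MutationCNN()
logit = model(torch.zeros(8, 4, 33), torch.zeros(8, 1))  # smoke test
```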
  • Item
    Software Engineering Methods for Cross-Team Projects with Multi-Project Teams
    (2024-03-11) Karlsson, Robin; Ilvonen, Sami; Perustieteiden korkeakoulu; Haaranen, Lassi
    Many software engineering methods exist for different types of projects and team structures. However, in organizations where cross-team projects are common and each person in a team is involved in multiple projects, existing software engineering methods are difficult to apply. Projects with several teams, services, and technologies often also have a complex software architecture, which can be challenging to manage and coordinate. The goal of the thesis was to find software engineering frameworks or practices that can be used for projects at CSC - IT Center for Science Ltd. In the case project in this thesis, a new feature was implemented for the web interface of the LUMI supercomputer. Implementing the feature required changes in the services of three different teams, with a wide variety of technologies used. Canonical Action Research was used to identify issues in the existing processes, test potential solutions in the case project, and evaluate the result. In the literature review, no existing software engineering frameworks were found that could be directly applied in the case organization. Instead, certain software engineering practices were selected from commonly used software engineering frameworks and previous research. The software engineering practices evaluated focused mainly on improving communication and coordination between the teams. The findings of the thesis, based on the case project, indicate that communication practices are important in software engineering. Good communication channels were beneficial for synchronizing activities and progress within the project. For communicating software architecture, diagrams made discussing technical details about the services and their interactions significantly easier. A coordinator role in the team was used to help synchronize knowledge and activities between the teams.
  • Item
    Inclusive Strategy Formulation Process in Public Healthcare Organizations of Finland: A Multiple Case Study on the Inaugural Strategy Making of Wellbeing Service Counties
    (2024-03-13) Telov, Timofei; Tolkki, Olli; Perustieteiden korkeakoulu; Vuori, Timo
    Healthcare systems around the world are facing significant transformations due to challenges like population aging, leading them to pursue centralization and strategic leadership. The same applies to the Finnish healthcare system, which was recently subjected to the largest governance reform in Finland’s history. The reform transferred organizational responsibility for the services to the newly established wellbeing service counties (WBC). It is in this setting of inaugural strategy formulation of the WBCs that this master’s thesis aims to research and document four different strategy formulation processes. The goal of the thesis is to provide both practitioners and researchers with valuable insights into the processes, with special emphasis on the topic of participation. The research is conducted as a qualitative multiple-case study using semi-structured interviews as the main data-gathering method. The research starts with the literature review, which provides a comprehensive outlook on the topics of strategy formulation and participation in strategy-making. It is followed by the empirical part of the study, the results of which are presented in the Findings section of this thesis. The empirical part includes both within-case and cross-case analyses of the strategy formulation processes of the case WBCs. The strategy formulation process of each WBC is described, summarized, and visualized, after which five key findings of the cross-case analysis are presented and discussed with reflection upon the literature. To summarize the findings, the approaches to strategy formulation varied, but broad participation was determined to be an essential and characterizing factor of the strategy formulation process of the WBCs. Broader participation was perceived to lead to the benefits of increased commitment and improved input generation for the strategy process. However, the research also identified some possible negative side-effects of wider participation, such as over-participation, low actualized value of the benefits, overwhelming of the strategy team, and an inability to narrow down. The research also determined that a lack of initial agreement about the strategy formulation effort can negatively impact the process, and that the will to include a multitude of views can turn the strategy into a sum of compromises.
  • Item
    Compute and Memory Efficient Neural Radiance Field Optimization With Hard Sample Mining
    (2024-03-11) Korhonen, Juuso; Tavakoli, Hamed; Perustieteiden korkeakoulu; Kannala, Juho
    Training Neural Radiance Fields (NeRF) demands substantial computational resources. Recent advancements, including parameterized input encoding, specialized hardware algorithms, and efficient ray sampling, have made the training process faster. However, NeRF training still requires lengthy training times and high GPU memory usage, ruling out lower-end devices with limited compute capability. Random ray selection is the prevalent practice at the core of NeRF methods. Random sampling is straightforward but inefficient, because most of the produced 3D point samples induce a zero gradient during training yet still account for the majority of the computation time and memory usage. To enhance efficiency, we propose a hard sample mining strategy. We take advantage of the fact that the backward pass takes roughly twice the compute time of the forward pass. The relatively low-cost forward pass is used to identify a subset of hard samples, in this case the 3D point samples that induce a large update gradient for the network. For this subset, we then do a second forward pass that builds the computational graph, and update the model. Building the computational graph and executing the expensive backward pass only for the subset drastically reduces overall iteration time and memory requirements. Experiments on real-world scenes with a state-of-the-art NeRF model, Instant-NGP, validate the significant improvements our hard sample mining mechanism offers in terms of improved view-synthesis quality with a reduced training time and memory budget. Furthermore, we expect that it can be seamlessly integrated as an enhancement for various NeRF methods.
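    The mechanism is simple enough to sketch. Below is a generic PyTorch training step that scores all samples with a cheap no-grad forward pass and backpropagates only through the highest-error subset; it abstracts away the Instant-NGP specifics and uses an assumed squared-error objective.
```python
import torch

def hard_mined_step(model, inputs, targets, optimizer, keep_frac=0.25):
    """One training iteration with hard sample mining: a cheap scoring
    pass identifies the hard samples, then the computational graph is
    built and the expensive backward pass run only for that subset."""
    with torch.no_grad():                        # cheap forward pass, no graph
        errors = (model(inputs) - targets).pow(2).sum(dim=-1)
    k = max(1, int(keep_frac * inputs.shape[0]))
    idx = errors.topk(k).indices                 # "hard" samples: largest error
    optimizer.zero_grad()
    loss = (model(inputs[idx]) - targets[idx]).pow(2).mean()  # graph for subset only
    loss.backward()                              # backward pass on k samples only
    optimizer.step()
    return loss.item()
```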
  • Item
    Input Data Processing Strategies for Software Reliability Growth Models
    (2024-03-11) Valtonen, Valtteri; Chren, Stanislav; Perustieteiden korkeakoulu; Fagerholm, Fabian
    Software reliability growth models produce important software reliability information. They are typically nonlinear regression models that are fit to data relating time to the cumulative number of issue reports. A processing phase converts raw data into this kind of fitting data, yet this phase has received little attention in the recent reliability model literature. Thus, this work aims to gather information on recently used reliability growth model input data processing strategies, discover their effects on model performance, and guide strategy usage. Recent scientific literature was searched for input data processing strategies, and strategy effect information was produced empirically. A software project problem report data set was collected, and the input data processing strategies were applied to it. Log-logistic, Yamada-Rayleigh, and Weibull models were fit to the produced data, and their goodness of fit and predictive accuracy were measured. Finally, these performance metrics were compared and their Spearman correlations analyzed. Guidelines were derived from the empirical results. A total of eighteen reliability model input data processing strategies were found in the recent literature. They are built out of filtration, transformation, grouping, and other types of processing actions; some actions are shared between strategies. Seven strategies were applied to 85 software projects using an automated analysis tool called STRAIT. The strategy effect varied between software projects: for some projects, results improved, while for others they degraded, in some cases drastically. The project result distribution improved for strategies that shortened the analysis time frame. Removing open issue reports made a reliability growth trend visible for several projects. The project properties of size in kilobytes, number of contributors, number of development days, and initial and processed issue counts typically have negligible to low Spearman correlations with the model performance results and their changes. Based on the results, it is beneficial to include many data processing strategy options in reliability analysis tools, as no strategy improves model results for all projects, and it is difficult to predict results from project properties. It also appears beneficial to remove open problem reports from the report data set and to limit the analysis to a single project phase instead of the full project period.
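    To make the fitting step concrete, the sketch below fits a Weibull-type mean value function to synthetic cumulative issue-report data with SciPy. The model form is standard in the SRGM literature, while the data and starting values are illustrative assumptions.
```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_mvf(t, a, b, c):
    """Weibull-type mean value function: expected cumulative number of
    issue reports by time t (a = total expected number of issues)."""
    return a * (1.0 - np.exp(-b * np.power(t, c)))

# t: days since project start; y: cumulative issue-report counts after
# whatever input data processing strategy (filtering, grouping, ...) was applied.
t = np.arange(1, 101, dtype=float)
y = 120 * (1 - np.exp(-0.01 * t**1.2)) + np.random.default_rng(1).normal(0, 2, t.size)
params, _ = curve_fit(weibull_mvf, t, y, p0=(y.max(), 0.01, 1.0), maxfev=10000)
a, b, c = params  # fitted parameters; goodness of fit assessed on residuals
```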
  • Item
    Geometry of Numbers and Exterior Algebras: Towards Bombieri–Vaaler's Version of Siegel's Lemma
    (2024-03-12) Rai, Mehr; Matala-aho, Tapani; Perustieteiden korkeakoulu; Hollanti, Camilla
    This thesis provides a study and exploration of the various tools needed to prove Bombieri–Vaaler's version of Siegel's lemma, which focuses on finding integer solutions to a system of linear equations that satisfy a certain bound. The thesis progresses through essential concepts, including lattice packings, Minkowski's convex body theorems, exterior algebras, determinants, rational subspaces, and heights of rational subspaces, while presenting some important examples and results illustrating these concepts. The final theorem of Bombieri–Vaaler's version of Siegel's lemma is an important tool used in transcendental number theory and Diophantine analysis.
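    For orientation, the theorem the thesis builds toward is usually stated as follows (standard formulation over the integers; the thesis should be consulted for the precise version proved there):
```latex
Let $A \in \mathbb{Z}^{M \times N}$ have rank $M < N$. Then there exist
$N - M$ linearly independent integer vectors
$\mathbf{x}_1, \dots, \mathbf{x}_{N-M} \in \mathbb{Z}^N$ with
$A\mathbf{x}_\ell = \mathbf{0}$ such that
\[
  \prod_{\ell=1}^{N-M} \lVert \mathbf{x}_\ell \rVert_\infty
  \;\le\; \frac{\sqrt{\det\!\left(A A^{\mathsf T}\right)}}{D},
\]
where $D$ is the greatest common divisor of the $M \times M$ minors of $A$.
```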