Browsing by Author "Främling, Kary"
Now showing 1 - 20 of 70
- Agenttipohjaiset menetelmät MES-järjestelmän toteutuksessa (Agent-based methods in the implementation of an MES system)
Perustieteiden korkeakoulu | Master's thesis (2017-12-11) Rahikainen, Mikko
The vision of future production is modular and efficient production systems in which products steer their own production process. The aim is to enable the manufacture of individualized products with a batch size of one without giving up the economic advantages of mass production. The term "Industry 4.0" describes this envisioned fourth industrial revolution. Agent-based systems, which remain functional in environments where a centralized decision-making architecture cannot be used, or where fast response times and high reliability are required, address these challenges in part. This thesis studied, from an MES perspective, the implementation of detailed production scheduling with agent-based methods, and a production simulation was built with the JADE framework for experimental research. Two different agent-based approaches to detailed scheduling were implemented. The first was a market-mechanism-based approach in which order agents decide on the use of resource agents based on the bids they receive. The second was a genetic-algorithm-based approach in which an optimization agent schedules the jobs to be run on the resource agents.
- Älylaitteiden kytkeminen terveydenhuollon tietojärjestelmiin (Connecting smart devices to healthcare information systems)
Perustieteiden korkeakoulu | Master's thesis (2016-10-27) Kaisanlahti, Jaakko
The Internet of Things can be exploited in healthcare for treating and preventing illness. Public healthcare, however, does not necessarily have the resources to provide suitable devices to anyone other than those in need of treatment. Making use of devices that consumers already own would enable health monitoring and disease prevention for essentially healthy people. In this thesis, a way to connect consumer-grade smart devices to healthcare information systems was developed. A set of simulated devices provided data in O-DF (Open Data Format). The data was transferred with the O-MI (Open Messaging Interface) protocol to a converter, which translated it into the form required by the FHIR (Fast Healthcare Interoperability Resources) standard used in healthcare. The biggest problems observed were related to choosing the representation of the O-DF data and to identifying the correspondences between data fields in the O-DF and FHIR formats. Implementing the system for production use would require the standards used to become more widespread, or the development of device-specific middleware that performs the data conversion.
- Analytics for enabling data-driven facility management using building management systems data
Perustieteiden korkeakoulu | Master's thesis (2020-08-18) Rehn, Jonathan
Facility managers today have vast amounts of data available through Building Automation Systems. However, this data is currently used primarily through rule-based automation and manual analysis, leaving room for new and more sophisticated analytical tools. This thesis explores and proposes analytical tools with the aim of making facility management more data-driven. Specifically, the maintenance, comfort, and energy consumption related activities of facility management are addressed. A linear correlation model for unsupervised fault detection and predictive maintenance is proposed and implemented. The model is concluded to be more interpretable than its industrial counterparts, as well as able to detect both system faults and equipment degradation. Energy consumption forecasting is concluded to be a key capability for facility managers to optimize consumption and participate in the Smart Grid. Two separate models for heating energy usage forecasting are proposed and implemented: the first a deep neural network, the second an additive ensemble model of regression trees. The latter model proves more accurate and substantially less resource-intensive to implement, while the deep neural network is suggested to be more suitable for complex forecasting. All three models presented in this thesis are meant to provide a first step with low implementation requirements and direct benefits, ultimately encouraging more business investment in data-oriented capabilities and infrastructure.
- Authentication and Access Control for Open Messaging Interface Standard
A4 Article in conference proceedings (2017-11-07) Yousefnezhad, Narges; Filippov, Roman; Javed, Asad; Buda, Andrea; Madhikermi, Manik; Främling, Kary
The number of Internet of Things (IoT) vendors is rapidly growing, providing solutions for all levels of the IoT stack. Despite the universal agreement on the need for a standardized technology stack, following the model of the World Wide Web, a large number of industry-driven, domain-specific standards hinder the development of a single IoT ecosystem. An attempt to solve this challenge is the introduction of O-MI (Open Messaging Interface) and O-DF (Open Data Format), two domain-independent standards published by The Open Group. Despite their good compatibility, they define no specific security model. This paper takes the first step of defining a security model for these standards by proposing suitable access control and authentication mechanisms that can regulate the rights of the different principals and operations defined in these standards. First, a brief introduction is provided of the O-MI and O-DF standards, including a comparison with existing standards. Second, the envisioned security model is presented, together with the implementation details of the plug-in module developed for the O-MI and O-DF reference implementation.
- Authentication and Authorization Modules for Open Messaging Interface (O-MI)
Perustieteiden korkeakoulu | Master's thesis (2018-10-08) Saeed, Aisha
With the constant rise of new technology, developments in the fields of computer science, wireless networks, storage capabilities, and sensing possibilities, along with the demand for continuous connectivity, have led to the formation of the Internet of Things (IoT) concept. Today, there are numerous organizations working on IoT technology aimed at developing smart products and services. Each company proposes its own methods directed at a particular field of industry, which results in several competing protocols and serves the concept of a unified system poorly. The Open Group attempted to address this issue by proposing the Open Messaging Interface (O-MI) and Open Data Format (O-DF) protocols, positioning O-MI as an IoT messaging standard analogous to HTTP for the World Wide Web (WWW). The proposed protocols have been designed to ensure robust development, data standardization, and the required security level. However, the security model needs to be upgraded with recent security techniques. This thesis attempts to specify appropriate authentication and authorization (access control) mechanisms that manage various consumers and provide functionality that fits into the O-MI/O-DF standards. The thesis first discusses several challenges regarding IoT security and then different authentication and authorization techniques available today. It then describes in detail the design decisions and implementation technicalities of the autonomous services created for the reference implementation of O-MI and O-DF.
- Automated IoT device identification based on full packet information using real-time network traffic
A1 Original article in a scientific journal (2021-04-02) Yousefnezhad, Narges; Malhi, Avleen; Främling, Kary
In an Internet of Things (IoT) environment, a large volume of potentially confidential data might be leaked from sensors installed everywhere. To ensure the authenticity of such sensitive data, it is important to first verify the source of the data and its identity. Practically, IoT device identification is the primary step toward a secure IoT system. An appropriate device identification approach can counteract malicious activities such as sending false data that triggers irreparable security issues in vital or emergency situations. Recent research indicates that primary identity metrics such as Internet Protocol (IP) or Media Access Control (MAC) addresses are insufficient due to their instability or easy accessibility. Thus, to identify an IoT device, analysis of the header information of the packets sent by the sensors is imperative. This paper proposes a combination of sensor measurement and statistical feature sets, in addition to a header feature set, in a classification-based device identification framework. Various machine learning algorithms have been adopted to identify different combinations of these feature sets to provide enhanced security in IoT devices. The proposed method has been evaluated under normal and under-attack circumstances by collecting real-time data from IoT devices connected in a lab setting to show the system's robustness.
- Blockchain distributed DNS without trust: Publishing IOT device addresses and verifying data
Perustieteiden korkeakoulu | Master's thesis (2017-12-11) Rasi, Jukka
A blockchain-enabled distributed DNS makes it possible to have a trustless system in which no participant needs to be trusted. Blockstack is such a distributed DNS, built on top of Bitcoin's blockchain. In this thesis I extend this trustless property to data sharing from an IoT device by creating a proof-of-concept implementation. By cryptographically linking the parts together, the trustless property of the underlying blockchain can be preserved all the way from the blockchain to the data shared by the device.
- Blockchain-based deployment of product-centric information systems
A1 Original article in a scientific journal (2021-02) Mattila, Juri; Seppälä, Timo; Valkama, Pellervo; Hukkinen, Taneli; Främling, Kary; Holmström, Jan
Collecting and utilizing product life-cycle data is both difficult and expensive for products that move between different industrial settings at various points of the product life-cycle. Product-centric approaches that present effective solutions in tightly integrated environments have been problematic to deploy across multiple industries and over longer timespans. Addressing deployment costs, incentives, and governance, this paper explores a blockchain-based approach for the deployment of product-centric information systems. Through explorative design science and systematic combining, the deployment of a permissionless blockchain system for collecting product life-cycle data is conceptualized, demonstrated, and evaluated by experts. The purpose of the blockchain-based solution is to manage product data interactions, to maintain an accurate single state of product information, and to provide an economic incentive structure for the provision and the deployment of the solution. The evaluation by knowledgeable researchers and practitioners identifies the aspects limiting blockchain-based deployment of solutions in the current industrial landscape. Combining theory and practice, the paper lays the foundation for a blockchain-based approach to product information management, placing design priority on inter-industrial and self-sustained deployment.
- Building Lifecycle Management System for Enhanced Closed Loop Collaboration
A4 Article in conference proceedings (2017) Främling, Kary; Kubler, Sylvain; Buda, Andrea; Robert, Jérémy; Le Traon, Yves
In the past few years, the architecture, engineering, and construction (AEC) industry has carried out efforts to develop BIM (Building Information Modelling) facilitating tools and standards for enhanced collaborative working and information sharing. Lessons learnt from other industries, such as PLM (Product Lifecycle Management) – an established tool in manufacturing to manage the engineering change process – revealed interesting potential to manage the building design and construction processes more efficiently. Nonetheless, one of the remaining challenges consists in closing the information loop between multiple building lifecycle phases, e.g. by capturing information from middle-of-life processes (i.e., use and maintenance) to re-use it in end-of-life processes (e.g., to guide disposal decision making). Our research addresses this lack of a closed-loop system in the AEC industry by proposing an open and interoperable Web-based building lifecycle management system. This paper gives (i) an overview of the requirement engineering process that has been set up to integrate efforts, standards, and directives of both the AEC and PLM industries, and (ii) first proofs-of-concept of our system implemented on two distinct campuses.
- CEFIoT: A Fault-Tolerant IoT Architecture for Edge and Cloud
A4 Article in conference proceedings (2018-05-04) Javed, Asad; Heljanko, Keijo; Buda, Andrea; Främling, Kary
Internet of Things (IoT), the emerging computing infrastructure that refers to the networked interconnection of physical objects, incorporates a plethora of digital systems that are being developed by means of a large number of applications. Many of these applications administer data collection on the edge and offer data storage and analytics capabilities in the cloud. This raises the following problems: (i) the processing stages in IoT applications need to have separate implementations for both the edge and the cloud, (ii) the placement of computation is inflexible with separate software stacks, as the optimal deployment decisions need to be made at runtime, and (iii) unified fault tolerance is essential in case of intermittent long-distance network connectivity problems, malicious harming of edge devices, or harsh environments. This paper proposes CEFIoT, a novel fault-tolerant architecture for IoT applications, adopting state-of-the-art cloud technologies and deploying them also for edge computing. We solve the data fault tolerance issue by exploiting the Apache Kafka publish/subscribe platform as the unified high-performance data replication solution, offering a common software stack for both the edge and the cloud. We also deploy Kubernetes for fault-tolerant management and for the advanced functionality allowing on-the-fly automatic reconfiguration of the processing pipeline to handle both hardware and network connectivity failures.
- ciu.image: An R Package for Explaining Image Classification with Contextual Importance and Utility
A4 Article in conference proceedings (2021) Främling, Kary; Knapič, Samanta; Malhi, Avleen
Many techniques have been proposed in recent years that attempt to explain the results of image classifiers, notably for the case when the classifier is a deep neural network. This paper presents an implementation of the Contextual Importance and Utility (CIU) method for explaining image classifications. It is an R package that can be used with the most common image classification models. The paper shows results for typical benchmark images, as well as for a medical data set of gastro-enterological images. For comparison, results produced by the LIME method are included. The results show that CIU produces similar or better results than LIME with significantly shorter calculation times. However, the main purpose of this paper is to bring the existence of this package to general knowledge and use, rather than to compare it with other explanation methods.
- A Cloud based Remote Diagnostics service for Industrial Paper Mill
Perustieteiden korkeakoulu | Master's thesis (2020-12-14) Khan, Muhammad
This is an era in which devices generate data around the clock, accumulating petabytes. Industrial equipment also generates telemetry data that is used to gain insights into industrial processes. Telemetry data also helps perform remote diagnostics, early failure detection, and incident cause discovery. This leads to the development of big data processing systems, which help answer how the data should be stored and processed. There are numerous existing architectures for big data processing systems, one of which is the Lambda Architecture, which provides a way to implement a distributed data processing system that can be customized according to business needs. The Lambda Architecture can, however, be complex to implement and maintain due to the combination of two processing systems in a single architecture. In this thesis, we propose and implement a robust and simplified approach for developing a data processing system using the Lambda Architecture. The proposed approach strives to use a minimal number of services, limited to Azure Cosmos DB, Apache Spark, Azure Event Hubs, and Azure Functions, to implement this complex architecture. We show that our approach makes the data processing system maintainable and reduces infrastructure management.
- CN-waterfall: a deep convolutional neural network for multimodal physiological affect detection
A1 Original article in a scientific journal (2022-02) Fouladgar, Nazanin; Alirezaie, Marjan; Främling, Kary
Affective computing solutions, in the literature, mainly rely on machine learning methods designed to accurately detect human affective states. Nevertheless, many of the proposed methods are based on handcrafted features, requiring sufficient expert knowledge in the realm of signal processing. With the advent of deep learning methods, attention has turned toward reduced feature engineering and more end-to-end machine learning. However, most of the proposed models rely on late fusion in a multimodal context, while addressing the interrelations between modalities for intermediate-level data representation has been largely neglected. In this paper, we propose a novel deep convolutional neural network, called CN-Waterfall, consisting of two modules: Base and General. While the Base module focuses on the low-level representation of data from each single modality, the General module provides further information, indicating relations between modalities in the intermediate- and high-level data representations. The latter module has been designed based on theoretically grounded concepts in the Explainable AI (XAI) domain and consists of four different fusions, mainly tailored to correlation- and non-correlation-based modalities. To validate our model, we conduct an exhaustive experiment on WESAD and MAHNOB-HCI, two publicly and academically available datasets in the context of multimodal affective computing. We demonstrate that our proposed model significantly improves the performance of physiological-based multimodal affect detection.
- Comparing seven methods for state-of-health time series prediction for the lithium-ion battery packs of forklifts
A1 Original article in a scientific journal (2021-11) Huotari, Matti; Arora, Shashank; Malhi, Avleen; Främling, Kary
A key aspect for forklifts is state-of-health (SoH) assessment to ensure the safety and reliability of the uninterrupted power source. Forecasting the battery SoH well is imperative to enable preventive maintenance and hence to reduce costs. This paper demonstrates the capabilities of gradient boosting regression for predicting the SoH time series under circumstances when there is little prior information available about the batteries. We compared the gradient boosting method with light gradient boosting, extra trees, extreme gradient boosting, random forests, long short-term memory networks, and a combined convolutional neural network and long short-term memory network method. We used multiple predictors and lagged target-signal decomposition results as additional predictors, and compared the resulting predictions with different sets of predictors for each method. For this work, we are in possession of a unique data set of 45 lithium-ion battery packs with large variation in the data. The best model we derived was validated by a novel walk-forward algorithm that also calculates point-wise confidence intervals for the predictions; we obtained reasonable predictions and confidence intervals. Furthermore, we verified this model against five other lithium-ion battery packs; the best model generalised to a great extent to this set of battery packs. The results for the final model suggest that we were able to improve on previously developed models. Moreover, we further validated the model for extracting cycle counts presented in our previous work with data from new forklifts; their battery packs completed around 3000 cycles in a 10-year service period, which corresponds to the cycle life of commercial Nickel-Cobalt-Manganese (NMC) cells.
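The walk-forward validation scheme mentioned in the abstract above can be illustrated with a minimal sketch. The model here is a trivial persistence predictor standing in for the paper's gradient boosting regressor, and the SoH trace is synthetic; only the expanding-window evaluation loop is the point, and all names are illustrative assumptions rather than the paper's actual implementation.

```python
# Hedged sketch of walk-forward validation for an SoH time series.
# The "model" is a persistence predictor (predict the last observed
# value), NOT the gradient boosting regressor used in the paper.

def walk_forward(series, min_train=5):
    """Expanding-window walk-forward: at each step, use all points
    seen so far to predict the next one; collect absolute errors."""
    errors = []
    for t in range(min_train, len(series)):
        history = series[:t]          # training window grows each step
        prediction = history[-1]      # persistence: predict the last value
        errors.append(abs(series[t] - prediction))
    return errors

# Synthetic, slowly degrading SoH trace (values in [0, 1]).
soh = [1.0 - 0.01 * i for i in range(20)]
errs = walk_forward(soh)
mae = sum(errs) / len(errs)
```

With a real forecaster, each step would refit the model on `history` before predicting, which is what keeps the evaluation free of look-ahead leakage.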
- Comparison of Contextual Importance and Utility with LIME and Shapley Values
A4 Article in conference proceedings (2021) Främling, Kary; Westberg, Marcus; Jullum, Martin; Madhikermi, Manik; Malhi, Avleen
Different explainable AI (XAI) methods are based on different notions of ‘ground truth’. In order to trust explanations of AI systems, the ground truth has to provide fidelity towards the actual behaviour of the AI system. An explanation that has poor fidelity towards the AI system’s actual behaviour cannot be trusted no matter how convincing the explanations appear to be for the users. The Contextual Importance and Utility (CIU) method differs from currently popular outcome explanation methods such as Local Interpretable Model-agnostic Explanations (LIME) and Shapley values in several ways. Notably, CIU does not build any intermediate interpretable model like LIME, and it does not make any assumption regarding linearity or additivity of the feature importance. CIU also introduces the notion of value utility and a definition of feature importance that differs from LIME and Shapley values. We argue that LIME and Shapley values actually estimate ‘influence’ (rather than ‘importance’), which combines importance and utility. The paper compares the three methods in terms of the validity of their ground truth assumption and their fidelity towards the underlying model through a series of benchmark tasks. The results confirm that LIME results tend to be neither coherent nor stable. CIU and Shapley values give rather similar results when limiting explanations to ‘influence’. However, by separating the ‘importance’ and ‘utility’ elements, CIU can provide more expressive and flexible explanations than LIME and Shapley values.
- A Comprehensive Security Architecture for Information Management throughout the Lifecycle of IoT Products
A1 Original article in a scientific journal (2023-03) Yousefnezhad, Narges; Malhi, Avleen; Keyriläinen, Tuomas; Främling, Kary
The Internet of Things (IoT) is expected to have an impact on business and the world at large in a way comparable to the Internet itself. An IoT product is a physical product with an associated virtual counterpart, connected to the internet with computational as well as communication capabilities. The possibility to collect information from internet-connected products and sensors gives unprecedented possibilities to improve and optimize product use and maintenance. Virtual counterpart and digital twin (DT) concepts have been proposed as a solution for providing the necessary information management throughout the whole product lifecycle, which we here call product lifecycle information management (PLIM). Security in these systems is imperative due to the multiple ways in which opponents can attack the system during the whole lifecycle of an IoT product. To address this need, the current study proposes a security architecture for the IoT, taking into particular consideration the requirements of PLIM. The security architecture has been designed for the Open Messaging Interface (O-MI) and Open Data Format (O-DF) standards for the IoT and product lifecycle management (PLM), but it is also applicable to other IoT and PLIM architectures. The proposed security architecture is capable of hindering unauthorized access to information and restricts access levels based on user roles and permissions. Based on our findings, the proposed security architecture is the first security model for PLIM to integrate and coordinate the IoT ecosystem, dividing the security approaches into two domains: the user-client domain and the product domain. The security architecture has been deployed in smart city use cases in three different European cities, Helsinki, Lyon, and Brussels, to validate the security metrics of the proposed approach. Our analysis shows that the proposed security architecture can easily integrate the security requirements of both clients and products, providing solutions for them as demonstrated in the implemented use cases.
- Computation Offloading and Task Scheduling among Multi-Robot Systems
Perustieteiden korkeakoulu | Master's thesis (2017-12-11) Zhu, Jixiang
Robots can cooperate so that some goals that are impossible for a single robot to achieve become feasible and attainable. Developing rapidly and exploited widely, the cloud further extends the resources a robot can access, thereby bringing significant potential benefits to robotic and automation systems. One of these potential benefits is computation offloading, which moves the computationally heavy parts of an application onto a server to reduce the execution time. However, to enable computation offloading, the questions of when, what, where, and how to offload must be answered. While some offloading mechanisms have been proposed in the mobile computing area (e.g., smartphones, tablets), these questions remain not fully answered, and many new challenges emerge when it comes to the robotic realm. This paper aims to apply computation offloading technology to a multi-robot system and investigate the performance impact it has on the processing time of robot applications. For this purpose, a computation offloading framework is proposed for an elastic computing model with the engagement of a two-tier cloud infrastructure, i.e., a public cloud infrastructure and an ad-hoc local network (fog) formed by a cluster of robots. Two scheduling algorithms, Heterogeneous-Earliest-Finish-Time (HEFT) and Critical-Path-on-a-Processor (CPOP), are implemented to schedule the offloaded tasks to available robots and servers such that the total execution time of the application is minimized. The offloading framework is implemented on top of the Robot Operating System (ROS) and tested through simulated applications. The results demonstrate the feasibility of the proposed offloading framework and indicate potential speed-up of robot applications by exploiting offloading technology.
- Context-specific sampling method for contextual explanations
A4 Article in conference proceedings (2021) Madhikermi, Manik; Malhi, Avleen; Främling, Kary
Explaining the results of machine learning models is an active research topic in the Artificial Intelligence (AI) domain, with the objective of providing mechanisms to understand and interpret the results of the underlying black-box model in a human-understandable form. With this objective, several eXplainable Artificial Intelligence (XAI) methods have been designed and developed based on varied fundamental principles. Some methods, such as Local Interpretable Model-agnostic Explanations (LIME) and SHAP (SHapley Additive exPlanations), are based on a surrogate model, while others, such as Contextual Importance and Utility (CIU), do not create or rely on a surrogate model to generate their explanations. Despite the difference in underlying principles, these methods use different sampling techniques, such as uniform sampling or weighted sampling, for generating explanations. CIU, which emphasizes context-aware decision explanation, employs a uniform sampling method for the generation of representative samples. In this research, we target uniform sampling methods, which generate samples that are not guaranteed to be representative in the presence of strong non-linearities or exceptional input feature value combinations. The objective of this research is to develop a sampling method that addresses these concerns. To this end, a new adaptive weighted sampling method is proposed. In order to verify its efficacy in generating explanations, the proposed method has been integrated with CIU and tested by deploying the special test case.
- Counterfactual, Contrastive, and Hierarchical Explanations with Contextual Importance and Utility
A4 Article in conference proceedings (2023) Främling, Kary
Contextual Importance and Utility (CIU) is a model-agnostic method for post-hoc explanation of prediction outcomes. In this paper we describe and show new functionality in the R implementation of CIU for tabular data. Much of that functionality is specific to CIU and goes beyond the current state of the art.
- Data discovery method for Extract-Transform-Load
A4 Article in conference proceedings (2019-05-09) Madhikermi, Manik; Främling, Kary
Information Systems (ISs) are fundamental to streamlining operations and supporting the processes of any modern enterprise. Being able to perform analytics over the data managed in various enterprise ISs is becoming increasingly important for organisational growth. Extract, Transform, and Load (ETL) are the necessary pre-processing steps of any data mining activity. Due to the complexity of modern ISs, extracting data is becoming increasingly complicated and time-consuming. In order to ease the process, this paper proposes a methodology, and a pilot implementation, that aims to simplify the data extraction process by leveraging the end-users' knowledge and understanding of the specific IS. The paper first provides a brief introduction and the current state of the art regarding existing ETL processes and techniques. Then, it explains the proposed methodology in detail. Finally, test results of typical data-extraction tasks from four commercial ISs are reported.
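As a generic illustration of the ETL pre-processing steps that the abstract above builds on (not the paper's own data discovery method), a minimal extract-transform-load pipeline might look as follows; the source field names and helper functions are hypothetical.

```python
# Minimal, generic ETL sketch: extract raw records from a hypothetical
# source, transform field names/types to a target schema, and load the
# cleaned records into a target store. All names are illustrative.

def extract(source):
    """Extract: read raw records from the source system."""
    return list(source)

def transform(records):
    """Transform: normalise field names and types for the target schema."""
    out = []
    for r in records:
        out.append({
            "customer_id": int(r["CustID"]),          # type coercion
            "name": r["Name"].strip().title(),        # whitespace/case cleanup
        })
    return out

def load(records, target):
    """Load: append the cleaned records to the target store."""
    target.extend(records)
    return target

raw = [{"CustID": "17", "Name": "  ada lovelace "}]
warehouse = load(transform(extract(raw)), [])
```

In practice each stage would talk to a real system (database connector, message queue, warehouse), but the separation of concerns is the same.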