Property:Abstract

From Navigators


This is a property of type Text.


Pages using the property "Abstract"

Showing 25 pages using this property.


B

BrunoLourenco Tese +Nowadays, integrating agility and security in application development is an extremely challenging process. There is a notion that security is a heavyweight process, requiring specialized knowledge and consuming the development teams' time. On the other hand, the acquisition of Web Applications (Web Apps) is often achieved through contracted services, because companies do not have the necessary software developers. Taking this into account, the risk of obtaining a product implemented by poorly qualified developers is a reality. The main objective of this thesis is to propose a solution and develop a tool that detects some forms of Injection Attacks (IA) and Cross-Site Request Forgery (CSRF) attacks in Web Apps. The latter is relevant because Web Apps sometimes employ Cross-Origin Resource Sharing (CORS). Statistics show that these attacks are among the most common security risks in Web Apps. IA is a class of attacks that relies on inputting data into a Web App to make it execute or interpret malicious information unexpectedly. Examples of attacks in this class include SQL Injection (SQLi), Header Injection, Log Injection, and Full Path Disclosure. CORS is used by browsers to allow controlled access to resources located outside a given domain. It extends and adds flexibility to the Same-Origin Policy (SOP). However, this mechanism also opens the door to cross-domain attacks if a site's CORS policy is misconfigured, and CORS is not intended to be a protection against cross-request attacks such as CSRF. The developed tool, called VuDRuCA, detects vulnerabilities associated with IA and CORS in Web Apps. It runs on a web server, providing this service to users on the internet and allowing them to analyse the external and internal links of a particular Web App. For external links, it detects evidence of IA, assigning a benign or malign classification to each identified link. For internal links, it checks for Cross-Origin calls, specifically CORS. VuDRuCA uses crawling techniques to navigate through the pages of the Web App and obtain the desired information. It also uses the VirusTotal API, a free online service that analyses URLs, enabling the discovery of malicious content detectable by antivirus engines and website scanners. As a backend, it uses a relational database to store the collected data so that it can be retrieved and analysed, reporting the presence of security indicators.
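As a rough illustration of the external-link classification step, a minimal Python sketch using the VirusTotal v2 URL report endpoint; the threshold, labels and function name are illustrative assumptions, not VuDRuCA's actual code:

    import requests

    VT_URL_REPORT = "https://www.virustotal.com/vtapi/v2/url/report"

    def classify_link(url, api_key, threshold=1):
        # Ask VirusTotal how many scanners flag this URL.
        params = {"apikey": api_key, "resource": url}
        report = requests.get(VT_URL_REPORT, params=params).json()
        if report.get("response_code") != 1:
            return "unknown"  # URL not (yet) known to VirusTotal
        positives = report.get("positives", 0)
        return "malign" if positives >= threshold else "benign"

    # Hypothetical usage on links gathered by the crawler:
    # for link in external_links:
    #     print(link, classify_link(link, MY_API_KEY))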
BrunoVavalaTese +The Cloud Computing model has incentivized companies to outsource services to third-party providers. Service owners can use third-party computational, storage and network resources while avoiding the cost of acquiring an IT infrastructure. However, they have to rely on the trustworthiness of the third-party providers, who ultimately need to guarantee that the services run as intended. The fundamental security challenge is how to empower companies that own and outsource such services, or clients using them, to check service execution on the remote cloud platform. A promising approach is based on hardware-enforced isolation and attestation of the service execution. Assuming that hardware attacks are infeasible, this protects the service from other malicious software or untrusted system administrators. Also, it allows clients to check that the results were produced as intended. While this paradigm is well known, previous work does not scale with large code and data sizes, lacks generality both with respect to hardware (e.g., either uses Trusted Platform Modules (TPMs) or Intel SGX) and software (e.g., only supports MapReduce applications), and makes undesirable security tradeoffs (e.g., resorting to a large Trusted Computing Base (TCB) to run unmodified services, or to a small TCB but with limited functionality). This thesis shows how to secure the execution of large-scale services efficiently and without these compromises. From the perspective of a client that sends a request and receives a response, trust can be established by verifying a small proof of correct execution that is attached to the result. On the remote provider's platform, a small trusted computing base enables the secure execution of generic services composed of a large source code base and/or working on large data sets, using an abstraction layer that is implementable on diverse trusted hardware architectures. Our small TCB implements three orthogonal techniques that are the core contributions of this thesis. The first one targets the identification (and the execution) of only the part of the code that is necessary to fulfill a client's request. This increases both security and efficiency by leaving any code that is not required to run the service outside the execution environment. The second contribution enables terabyte-scale data processing by means of a secure in-memory data handling mechanism. This allows a service to retrieve data that is validated on access and before use. Notably, data I/O is performed using virtual memory mechanisms that do not require any system call from the trusted execution environment, thereby reducing the attack surface. The third contribution is a novel fully-passive secure replication scheme that is tolerant to software attacks. Fault-tolerance delivers availability guarantees to clients, while passive replication allows for computationally efficient processing. Interestingly, all of our techniques are based on the same abstraction layer of the trusted hardware. In addition, our implementation and experimental evaluation demonstrate the practicality of these approaches.

C

CANAMP +Fault-tolerant distributed systems based on field-buses may take advantage of reliable and atomic broadcast. There is a common belief that CAN native mechanisms provide atomic broadcast. In this paper, we dismiss this misconception, explaining how network errors may lead to inconsistent message delivery and to the generation of message duplicates. These errors may occur when faults hit the last two bits of the end-of-frame delimiter. Although rare, their influence cannot be ignored in highly fault-tolerant systems. Finally, we give a protocol suite that handles the problem effectively.
CAlmeidacic2001 +The widespread use of computers and communication networks creates a distributed computing environment with a demand to build applications with ever-increasing requirements concerning dependability and real-time characteristics. However, this situation induces requirements on the communication subsystems that cannot be easily satisfied by most existing infrastructures without adaptations. These systems are not usually fully synchronous, presenting a high variability in response time that makes it difficult to achieve real-time operation. To improve this situation, we propose the use of the quasi-synchronous approach, where a small synchronous part of the system is used to control and validate the other parts. In the context of this paper, our target communication infrastructure is the ISO 8802/3 LAN (Ethernet), a low-cost network with a large installed base. Being able to improve the real-time characteristics of such an environment offers a very cost-effective solution to a large class of applications. In order to build our proposed architecture in this setting, we introduce some mechanisms to enforce the real-time characteristics of the access control layer.
CODASPY 2016 +After more than a decade of research, web application security continues to be a challenge, and the backend database remains the most appetizing target. The paper proposes preventing injection attacks against the database management system (DBMS) behind web applications by embedding protections in the DBMS itself. The motivation is twofold. First, the approach of embedding protections in operating systems and in the applications running on top of them has been effective in protecting those applications. Second, there is a semantic mismatch between how SQL queries are believed to be executed by the DBMS and how they are actually executed, leading to subtle vulnerabilities in protection mechanisms. The approach – SEPTIC – was implemented in MySQL and evaluated experimentally with web applications written in PHP and Java/Spring. In the evaluation, SEPTIC showed neither false negatives nor false positives, contrary to alternative approaches, while causing a low performance overhead, in the order of 2.2%.
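To make the attack class concrete, a minimal generic sketch (using SQLite as a stand-in for MySQL; this illustrates SQL injection in general, not SEPTIC's detection mechanism):

    import sqlite3  # stand-in for MySQL; the injection pattern is the same

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, pwd TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_input = "' OR '1'='1"  # classic SQLi payload

    # Vulnerable: attacker data is concatenated into the query string,
    # so the DBMS parses it as SQL structure rather than as a value.
    query = "SELECT * FROM users WHERE name = '" + user_input + "'"
    print(conn.execute(query).fetchall())  # leaks every row

    # Safe: a parameterized query keeps the payload out of the SQL parse tree.
    print(conn.execute("SELECT * FROM users WHERE name = ?",
                       (user_input,)).fetchall())  # returns []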
Caldeira2015Extreme +Wireless sensor and actuator networks are now ubiquitous, being used in a continuously growing number of application settings. Many of these applications, such as those commonly found in avionics and aerospace, have the ability to host, in the same computing platform, applications with different levels of criticality (or importance), i.e. mixed-criticality applications, where real-time and dependability requirements are a must. Wireless sensor and actuator networks (WSANs) are making their way into these extreme environments and application domains. However, one key point is that WSANs are extremely susceptible to communication errors induced by electromagnetic interference. Furthermore, there is a general lack of knowledge of such error patterns, as well as of open tools enabling their capture. This paper presents a state-of-the-art solution for one-hop assessment of WSANs in the presence of errors. The solution includes devices and comprehensive methods to monitor the real-time behaviour of the network, to emulate accidental errors and to perform intentional attacks. This makes it possible to study the error, timing and performance characteristics of WSANs, thus contributing to an accurate characterization of the behaviour of network-based protocols in several application domains, including extreme environments, and opening the way for system verification/validation, as well as qualification and certification. A prototype of the assessment suite based on the IEEE 802.15.4 standard is presented, along with a set of simple, yet representative, use cases.
Canclock +This paper presents a new fault-tolerant clock synchronization algorithm designed for the Controller Area Network (CAN). The algorithm provides all correct processes of the system with a global timebase, despite the occurrence of faults in the network or in a minority of processes. Such a global timebase is a requirement of many distributed real-time control systems. Designing protocols for CAN is justified by the increasing use of this network in industrial automation applications. CAN has a number of unique properties that can be used to improve the precision and performance of a clock synchronization algorithm. Unfortunately, some of its features also make the implementation of a fault-tolerant clock synchronization service a non-trivial task. Our algorithm addresses both the positive and the negative aspects of CAN.
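For background, one classic building block of fault-tolerant clock synchronization is a fault-tolerant averaging convergence function, sketched below; this is a generic, well-known technique, not necessarily the convergence function used by this algorithm:

    def ft_average(readings, f):
        # Discard the f lowest and f highest clock readings, which may
        # come from faulty processes, and average the remainder.
        assert len(readings) > 2 * f, "need more than 2f readings"
        trimmed = sorted(readings)[f:len(readings) - f]
        return sum(trimmed) / len(trimmed)

    # Example: one faulty clock reporting a wild value is masked.
    print(ft_average([10.01, 10.02, 9.99, 42.0, 10.00], f=1))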
Carraca2014INForum +Time- and Space-Partitioned systems are a current trend in aerospace systems and in autonomous vehicles in general. Such systems employ a partitioned environment through the separation of applications into logical containers called partitions. Time and Space Partitioning (TSP) ensures that partitions do not mutually interfere with respect to the fulfilment of real-time and addressing-space encapsulation requirements. In this paper we present an architecture for future TSP systems and its extension into the security domain. We describe the security components that make this architecture well suited to the construction of systems with Multiple Independent Levels of Safety and Security (MILS).
Casimiro-2017-ADA-Europe +Advances in sensor, microprocessor and communication technologies have been fostering new applications of cyber-physical systems, often involving complex interactions between distributed autonomous components and operation in harsh or uncertain contexts. This has led to new concerns regarding performance, safety and security, while timeliness requirements must still be met. To reconcile uncertainty with the required predictability, hybrid system architectures have been proposed, which separate the system in two parts: one that behaves in a best-effort way, depending on the context, and another that behaves as predictably as needed, providing critical services for a safe and secure operation. In this paper we address the problem of verifying the correct provisioning of critical functions at runtime in such hybrid architectures. We consider, in particular, the KARYON hybrid architecture and its Safety Kernel. We also consider a hardware-based non-intrusive runtime verification approach, describing how it is applied to verify Safety Kernel software functions. Finally, we experimentally evaluate the performance of two distinct Safety Kernel implementations and discuss the feasibility of incorporating non-intrusive runtime verification.
Casimiro09a +Building distributed embedded systems in wireless and mobile environments is more challenging than when fixed network infrastructures can be used. One of the main issues is the increased uncertainty and lack of reliability caused by interference and fading in the communication, dynamic topologies, and so on. When predictability is an important requirement, the uncertainties created by wireless networks become a major concern. The problem may be even more difficult if safety-critical requirements are also involved. In this paper we discuss the use of hybrid models and architectural hybridization as one possible alternative to deal with the intrinsic uncertainties of wireless and mobile environments in the design of distributed embedded systems. In particular, we consider the case of safety-critical applications in the automotive domain, which must always operate correctly in spite of the existing uncertainties. We provide guidelines and a generic architecture for the development of these applications in the considered hybrid systems. We also address interface issues and describe a programming model that is hybridization-aware. Finally, we illustrate the ideas and the approach presented in the paper using a practical application example.
Casimiro11a +In asynchronous systems subject to process and network failures, distributed protocols often use more or less explicit timeouts to achieve progress. Since safety properties are guaranteed independently of the specific timeout value, timeout selection tends to be seen as an implementation detail. However, when network delays are unstable and susceptible to network contention, such as in wireless environments, it becomes important to dynamically adapt timeout values in order to address performance concerns. In this paper we discuss the problem of transforming static timeout-based protocols into dynamic ones, which can autonomically and dynamically select timeout values for improved performance. We propose a methodological approach and we present an example that illustrates how the methodology applies in practice.
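As a generic illustration of dynamic timeout selection, a sketch of a common EWMA-based estimator in the style of TCP's round-trip-time tracking; the gains and initial values are illustrative assumptions, and this is not the methodology proposed in the paper:

    class AdaptiveTimeout:
        # EWMA (Jacobson-style) timeout adaptation from observed delays.
        def __init__(self, initial=1.0, alpha=0.125, beta=0.25, k=4.0):
            self.srtt = initial        # smoothed delay estimate
            self.rttvar = initial / 2  # smoothed delay variation
            self.alpha, self.beta, self.k = alpha, beta, k

        def observe(self, sample):
            # Feed one measured message round-trip delay (seconds).
            self.rttvar = (1 - self.beta) * self.rttvar \
                          + self.beta * abs(self.srtt - sample)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * sample
            return self.timeout()

        def timeout(self):
            # Current timeout: estimate plus a safety margin.
            return self.srtt + self.k * self.rttvar

    # Example: the timeout grows when delays become unstable.
    t = AdaptiveTimeout()
    for d in (0.2, 0.25, 0.9, 1.4):
        print(round(t.observe(d), 3))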
Casimiro2010ADS +As software systems become increasingly ubiquitous, issues of dependability become ever more crucial. Given that solutions to these issues must be considered from the very beginning of the design process, it is clear that dependability and security have to be addressed at the architectural level. This book, as well as its six predecessors, was born of an effort to bring together the research communities of software architectures, dependability, and security. This state-of-the-art survey contains expanded, peer-reviewed papers based on selected contributions from the Workshop on Architecting Dependable Systems (WADS 2009), held at the International Conference on Dependable Systems and Networks (DSN 2009), as well as a number of invited papers written by renowned experts in the area. The 13 papers are organized in topical sections on: mobile and ubiquitous systems, architecting systems, fault management, and experience and vision.
Casimiro2012DCDV +Cloud infrastructures play an increasingly important role for telecom operators, because they enable internal consolidation of resources with the corresponding savings in hardware and management costs. However, this same consolidation exposes core services of the infrastructure to very disruptive attacks. This is the case of monitoring, which needs to be dependable and secure to ensure proper operation of large datacenters and cloud infrastructures. We argue that currently existing centralized monitoring approaches (e.g., relying on a single solution provider, using single point of failure components) represent a huge risk, because a single vulnerability may compromise the entire monitoring infrastructure. In this paper, we describe the TRONE approach to trustworthy monitoring, which relies on multiple components to achieve increased levels of reliance on the monitoring data and hence increased trustworthiness. In particular, we focus on the TRONE framework for event dissemination, on security-oriented diagnosis based on collected events and on fast network adaptation in critical situations based on multi-homing application support. To validate our work, we will deploy and demonstrate our solutions in a live environment provided by Portugal Telecom.
Casimiro2012SSS +KARYON, a kernel-based architecture for safety-critical control, is a European project that proposes a new perspective to improve performance of smart vehicle coordination focusing on Advanced Driver Assistance Systems (ADASs) and Unmanned Aerial Systems (UAS). The key objective is to provide system solutions for predictable and safe coordination of smart vehicles that autonomously cooperate and interact in an open and inherently uncertain environment. Currently, these systems are not allowed to operate on the public roads or in the air space, as the risk of causing severe damage cannot be excluded with sufficient certainty. The impact of the project is two-fold; it will provide improved vehicle density without driver involvement and increased traffic throughput to maintain mobility without a need to build new traffic infrastructures. The results will improve interaction in cooperation scenarios while preserving safety and assessing it according to standards. The prospective project results include self-stabilizing algorithms for vehicle coordination, communication and synchronization. In addition, we aim at showing that the safety kernel can be designed to be a self-stabilizing one.
Casimiro2013karyon +KARYON, a kernel-based architecture for safety-critical control, is a European project that proposes a new perspective to improve performance of smart vehicle coordination. The key objective of KARYON is to provide system solutions for predictable and safe coordination of smart vehicles that autonomously cooperate and interact in an open and inherently uncertain environment. One of the main challenges is to ensure high performance levels of vehicular functionality in the presence of uncertainties and failures. This paper describes some of the steps being taken in KARYON to address this challenge, from the definition of a suitable architectural pattern to the development of proof-of-concept prototypes intended to show the applicability of the KARYON solutions. The project proposes a safety architecture that exploits the concept of architectural hybridization to define systems in which a small local safety kernel can be built for guaranteeing functional safety along a set of safety rules. KARYON is also developing a fault model and fault semantics for distributed, continuous-valued sensor systems, which allows abstracting specific sensor faults and facilitates the definition of safety rules in terms of quality of perception. Solutions for improved communication predictability are proposed, ranging from network inaccessibility control at lower communication levels to protocols for assessment of cooperation state at the process level. KARYON contributions include improved simulation and fault-injection tools for evaluating safety assurance according to the ISO 26262 safety standard. The results will be assessed using selected use cases in the automotive and avionic domains.
Casimiro2014ADSN +Cooperative vehicular systems base their coordination on inherently uncertain inter-vehicle communications. If not conveniently managed, this uncertainty can either lead to inefficient coordination solutions or to optimistic but unsafe ones. We consider that cooperative functions can be executed at several service levels, and we use the architectural concept of a safety kernel for managing the service level in order to achieve the best possible performance while keeping the system safe. We use the Gulliver test-bed to demonstrate the safety kernel concept by means of a pilot system implementation on scaled vehicles with sensors and communication capabilities. The demonstrated architecture incorporates: (1) a local dynamic map (LDM) that uses local and remote sensory information to calculate the location of nearby objects, (2) a safety kernel to manage service levels, (3) a cooperative level-of-service evaluator that allows vehicles to reach agreement on a common service level and, finally, (4) a driver manager that executes in accordance with the cooperative level of service when determining how to calculate the trajectory. This paper explains how the different components considered in the architectural concept operate, and shows how it is possible to use trajectory planning algorithms (similar to existing ones) when implementing the concept.
Casimiro2014sies-invited +Future vehicular systems will be able to cooperate in order to perform many functions in a more effective and efficient way. However, achieving predictable and safe coordination of vehicles that autonomously cooperate in open and uncertain environments is a challenging task. Traditional solutions for achieving safety either impose restrictions on performance or require costly resources to deal with worst-case situations. In this paper, we describe a generic architectural pattern that addresses this problem. We consider that cooperative functions can be executed with multiple levels of service, and we rely on a safety kernel to manage the service level at run time. A set of safety rules defined at design time determines the conditions under which the cooperative function can be performed safely at each level of service. The paper provides details of our implementation of this safety kernel, covering both hardware and software aspects. It also presents an example application of the proposed solutions in the development of a demonstrator using scaled vehicles.
Casimiro2019SAFECOMP-FA +Continuous monitoring of aquatic environments using water sensors is important for several applications related to aquaculture and/or water resources management, as well as for recreational activities. Since sensors are constantly subjected to potentially strong currents and debris accumulation, and the communication between sensors may be affected by waves and electromagnetic interference, operating sensors in the water environment presents several challenges to data quality assurance and to dependable monitoring. Thus, it is fundamental to address these challenges in order to avoid false alarms or ignoring relevant events. In this paper we present the AQUAMON project, whose objective is to develop a dependable platform based on WSNs for monitoring in aquatic environments. The project addresses data communication and data quality problems by performing comparative studies of available wireless technologies with respect to aspects with impact on communication quality and deployment cost, and by proposing new data processing approaches to detect sensor and network failures affecting data quality and to mitigate the effects of these failures.
Casimiro2019SRDS-FA +The vision of automated driving promises safer and more cost-efficient transport systems. Automated driving systems have to demonstrate high levels of dependability and affordability. Recent advances in new communication technologies, e.g., 5G, allow significant cost reduction of timely shared sensory information. However, the design of fault-tolerant automated driving systems remains an open challenge. This work considers the design of automated driving systems through the lens of self-stabilization, a very strong notion of fault-tolerance. Our self-stabilizing algorithms guarantee, within a bounded period, recovery from a broad fault model and arbitrary state corruption. After this recovery period, our algorithms provide safe maneuver execution despite the presence of failures, such as unbounded periods of packet loss and timing failures, as well as inaccurate sensory information and malicious behavior. We evaluate the proposed algorithms through a rigorous correctness proof and a worst-case analysis, as well as a prototype that focuses on an intersection crossing protocol. We validate our prototype via computer simulations and a testbed implementation. Our preliminary results show a reduction in the number of vehicle collisions and dangerous situations.
Casimiro2020TIE +Industrial control systems (ICS) include networked control systems (NCS), which have used Real-Time Ethernet (RTE) protocols for many years, well before the debut of the Time-Sensitive Networking (TSN) initiative. Today, Ethernet-based control systems are used all across Industry 4.0, including in critical applications, allowing for straightforward integration with IT layers. Even though it is known that current RTE protocols do not have strong authentication or ciphering options, it is still very challenging to perform undetected cyber-attacks on these protocols while the NCS is in operation, in particular because such attacks must comply with very strict and small temporal constraints. In this paper, a model-based attack is proposed for service degradation of NCS. The attack is carried out in real time and can remain undetected for the entire plant life. The attack can be applied to any RTE protocol and, without loss of generality, a detailed analysis of stealth techniques is provided for a specific real use case based on PROFINET. The experimental results demonstrate the feasibility of the proposed attack and its high effectiveness. The paper also points out some possible future investigation directions to mitigate the attack.
Cheapcis09 +Today's critical infrastructures, like the power grid, are essentially physical processes controlled by computers connected by networks. They are usually as vulnerable as any other interconnected computer system, but their failure has a high socio-economic impact. The report describes a new construct for the protection of these infrastructures, based on distributed algorithms and mechanisms implemented between a set of devices called CIS. CIS collectively ensure that incoming/outgoing traffic satisfies the security policy of an organization facing accidents and attacks. However, they are not simple firewalls but distributed protection devices based on a sophisticated access control model and designed with intrusion-tolerant capabilities. The report discusses the rationale behind the use of CIS to improve the resilience of critical infrastructures, and it describes and evaluates two CIS implementations, one using physical replicas and another using virtual machine (VM) based replicas. Our intrusion-tolerant solution is cheap in four different ways: it uses fewer replicas than other intrusion-tolerant services; it does not require expensive consensus protocols; the performance overhead is minimal; and it can be deployed in a single physical machine through the use of VM technology.
ClaudioMartins Tese +Today's threats use multiple means of propagation, such as social engineering, email, and application vulnerabilities, and often operate in different phases, such as single-device compromise, lateral movement through the network, and data exfiltration. These complex threats rely on advanced tactics to remain unknown to traditional security defences. One type that has had a major impact on the rise of cybercrime is the advanced persistent threat (APT): APTs have clear objectives, are highly organized and well-resourced, and tend to perform long-term stealthy campaigns with repeated attempts. As organizations realize that attacks are increasing in size and complexity, threat intelligence (TI) is growing in popularity and use amongst them. This trend has followed the evolution of APTs, as they require a different level of response, one that is more specific to the organization. TI can be obtained in many formats, with open source intelligence (OSINT) being one of the most common, and threat intelligence platforms (TIPs) aid organizations in consuming, producing and sharing TI. TIPs have multiple advantages that enable organisations to easily bootstrap the core processes of collecting, normalising, enriching, correlating, analysing, disseminating and sharing threat-related information. However, current TIPs have limitations that prevent their mass adoption. This dissertation proposes a solution to some of these limitations, related to threat knowledge management, limited technology enablement in threat triage, the high volume of shared threat information, data quality, and limited advanced analytics capabilities and task automation. Overall, our solution improves the quality of TI by classifying it according to a common taxonomy, removing information with low value, enriching it with valuable information from OSINT sources, and aggregating it into clusters of events with similar information. This dissertation offers a complete data analysis of three OSINT feeds and the results that led us to design our solution, a detailed description of the architecture of our solution, its implementation, and its validation, including the processing of events from other academic solutions.
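As a toy illustration of the aggregation step, grouping events that share indicators of compromise; this greedy sketch and its sample data are hypothetical, not the dissertation's algorithm:

    def cluster_events(events, min_shared=1):
        # Each event is a set of IoC strings; merge events into a
        # cluster when they share at least min_shared indicators.
        clusters = []
        for event in events:
            for cluster in clusters:
                if len(event & cluster) >= min_shared:
                    cluster |= event  # merge into existing cluster
                    break
            else:
                clusters.append(set(event))
        return clusters

    # Hypothetical feed entries sharing a domain indicator:
    feeds = [
        {"1.2.3.4", "evil.example.com"},
        {"evil.example.com", "deadbeefhash"},
        {"5.6.7.8"},
    ]
    print(cluster_events(feeds))  # two clusters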
Cogo12MSc +Obtaining correct results and behaviour in computing is a long-standing concern. Such a guarantee can be obtained through fault and intrusion tolerance mechanisms, which aim to tolerate crash and arbitrary faults. Byzantine fault-tolerant replication, when combined with proactive recovery techniques, can tolerate any number of arbitrary faults during the entire system lifetime. However, common vulnerabilities shared between replicas can compromise such tolerance, making diversity a complementary mechanism. Diversity consists in providing and combining diverse resources to increase vulnerability independence between system components. Obtaining diversity automatically is a process that can be decomposed into two phases: creation and selection. The first phase consists in providing enough diverse resources to be considered, combined and selected in the second phase. In this thesis we present the DiversityAgent, a Java library for selecting cloud resources considering multiple diversity properties. Its clients only need to register the available resources; the DiversityAgent then assumes the responsibility of selecting an appropriate combination of cloud computing resources for each server deployment. In order to design the DiversityAgent, we review taxonomies for diversity in computer systems, analyse several diversity group properties supported by cloud providers or tools, and identify opportunities for cloud computing players to contribute to the diversity management area. This document contains a review of basic fault and intrusion tolerance mechanisms, followed by an extensive analysis of diversity in cloud computing environments and by the development of the DiversityAgent. We also present an integration of our component with two use cases foreseen by the CloudFIT project, and present the results of correctness and performance evaluations. The document closes with final remarks and possible future work, besides three appendices covering the DiversityAgent public interfaces and usage and customisation tutorials.
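To make the selection phase concrete, one possible strategy (sketched in Python for brevity, with hypothetical property names; the actual DiversityAgent is a Java library and may select differently) is to pick the resource combination that maximizes pairwise property mismatches:

    from itertools import combinations

    def diversity_score(combo, properties=("os", "provider", "region")):
        # Count pairwise property mismatches; higher means more diverse.
        return sum(a[p] != b[p]
                   for a, b in combinations(combo, 2)
                   for p in properties)

    def select_replicas(resources, n):
        # Exhaustively pick the n-resource combination with the highest
        # diversity score (fine for small registries).
        return max(combinations(resources, n), key=diversity_score)

    # Hypothetical registry of cloud resources:
    resources = [
        {"id": "vm1", "os": "linux",   "provider": "aws",   "region": "eu"},
        {"id": "vm2", "os": "windows", "provider": "azure", "region": "us"},
        {"id": "vm3", "os": "linux",   "provider": "gcp",   "region": "eu"},
        {"id": "vm4", "os": "bsd",     "provider": "aws",   "region": "ap"},
    ]
    print([r["id"] for r in select_replicas(resources, 3)])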
Cogo13fitch +Despite the fact that cloud computing offers a high degree of dynamism in resource provisioning, there is a general lack of support for managing dynamic adaptations of replicated services in the cloud, and, even when such support exists, it is focused mainly on elasticity by means of horizontal scalability. We analyse the benefits a replicated service may obtain from dynamic adaptations in the cloud and the requirements they place on the replication system. For example, adaptation can be done to increase and decrease the capacity of a service, move service replicas closer to their clients, obtain diversity in the replication (for resilience), recover compromised replicas, or rejuvenate ageing replicas. We introduce FITCH, a novel infrastructure to support dynamic adaptation of replicated services in cloud environments. Two prototype services validate this architecture: a crash fault-tolerant Web service and a Byzantine fault-tolerant key-value store based on state machine replication.
Cogo14biobank +BiobankCloud is an EU-funded FP7 project that will develop a cloud-computing platform as a service (PaaS) for the storage, analysis and inter-connection of biobank data. Our platform will provide security, storage, data-intensive computing tools, bioinformatics workflows, and support for allowing biobanks to share data with one another, all within the existing regulatory frameworks for the storage and usage of biobank data. In this poster we present the key ideas of BiobankCloud and how they relate to each other to compose a scalable, secure storage and analysis solution for NGS data from biobanked samples. Additionally, we provide examples of bioinformatics workflows that can benefit from this platform.