Qatar Foundation Annual Research Conference Proceedings Volume 2014 Issue 1
- Conference date: 18-19 Nov 2014
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2014
- Published: 18 November 2014
On And Off-body Path Loss Model Using Planar Inverted F Antenna
Authors: Mohammad Monirujjaman Khan and Qammer Hussain Abbasi
The rapid development of biosensors and wireless communication devices brings new opportunities for Body-Centric Wireless Networks (BCWNs), which have recently received increasing attention due to their promising applications in medical sensor systems and personal entertainment technologies. Body-centric wireless communications (BCWCs) are central to the development of fourth-generation mobile communications. In body-centric wireless networks, various units/sensors are scattered on/around the human body to measure specified physiological data, as in patient monitoring for healthcare applications [1-3]. A body-worn base station receives the medical data measured by the sensors located on/around the human body. In BCWCs, communication among on-body devices is required, as well as communication with external base stations. Antennas are an essential component of wearable devices in body-centric wireless networks, and they play a vital role in optimizing radio system performance. The human body is considered an uninviting and even hostile environment for a wireless signal: diffraction and scattering from body parts, in addition to tissue losses, lead to strong attenuation and distortion of the signal [1]. To design power-efficient on-body and off-body communication systems, an accurate understanding of the wave propagation, the radio channel characteristics and the attenuation around the human body is extremely important. In the past few years, researchers have thoroughly investigated narrowband and ultra-wideband on-body radio channels. In [4], an on-body radio channel characterisation was presented at ultra-wideband frequencies. In body-centric wireless communications, communication is needed among devices mounted on the body as well as with off-body devices.
In previous studies, researchers have designed antennas for on-body communications and investigated on-body radio channel performance using both narrowband and ultra-wideband technologies. This paper presents an on-body and off-body path loss model using a Planar Inverted-F Antenna (PIFA). The antenna used in this study operates in two frequency bands: 2.45 GHz (ISM band) and 1.9 GHz (PCS band). The 2.45 GHz band is used for communication over the human body surface (on-body), and the 1.9 GHz band is used for communication from body-mounted devices to off-body units (off-body communications). Measurement campaigns were performed in an indoor environment and in an anechoic chamber, using a frequency-domain measurement set-up. The antenna design and the on- and off-body path loss model results are presented.
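Path loss models extracted from such measurement campaigns typically take the standard log-distance form PL(d) = PL(d0) + 10·n·log10(d/d0), with the exponent n fitted to the measurements. A minimal sketch of that fit, using hypothetical measured values rather than the paper's data:

```python
import numpy as np

# Hypothetical measured data: antenna separations (m) and path loss (dB).
d = np.array([0.1, 0.2, 0.4, 0.8, 1.0])
pl = np.array([35.0, 41.2, 47.5, 53.8, 55.9])

d0 = 0.1  # reference distance (m)
# Log-distance model: PL(d) = PL(d0) + 10*n*log10(d/d0)
x = 10 * np.log10(d / d0)
# Least-squares fit for the intercept PL(d0) and path loss exponent n
A = np.column_stack([np.ones_like(x), x])
pl0, n = np.linalg.lstsq(A, pl, rcond=None)[0]
print(round(pl0, 1), round(n, 2))
```

On-body channels usually show exponents well above the free-space value of 2, which is why fitting n per scenario (on-body vs. off-body, indoor vs. anechoic) matters.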
Performance Analysis Of Heat Pipe-based Photovoltaic-thermoelectric Generator (HP-PV/TEG) Hybrid System
Authors: Adham Makki and Siddig Omer
Photovoltaic (PV) cells can absorb up to 80% of the incident solar radiation; however, only a certain percentage of the absorbed energy is converted into electricity, depending on the conversion efficiency of the PV cell technology used, while the remainder is dissipated as heat that accumulates on the surface of the cells, causing elevated temperatures. Temperature rise at the cell level is one of the most critical issues influencing the performance of PV cells: it causes serious degradation and shortens their lifetime, so cooling of the PV module during operation is essential. Hybrid PV designs that simultaneously generate electrical energy and utilize the waste heat have proven to be the most promising solution. In this study, an analytical investigation of a hybrid system comprising a Heat Pipe-based Photovoltaic-Thermoelectric Generator (HP-PV/TEG) is presented. The system incorporates a PV panel for direct electricity generation, a heat pipe to absorb excess heat from the PV cells and promote uniform temperature distribution across the surface of the panel, and a thermoelectric generator (TEG) to perform direct heat-to-electricity conversion. A mathematical model based on the heat transfer processes within the system is developed to evaluate the cooling capability and predict the overall thermal and electrical performance of the hybrid system. Results are presented in terms of the electrical efficiency of the system. It was observed that integrating TEG modules with PV cells improves the performance of the PV cells by utilizing the available waste heat, leading to higher output power. The system can be applied in regions with hot desert climates where electricity demand is higher than thermal energy demand.
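The coupling described above can be illustrated with the standard linear temperature-dependence model for PV efficiency plus a fixed TEG conversion efficiency applied to the waste heat; all parameter values below are hypothetical placeholders, not the paper's:

```python
# Illustrative sketch (hypothetical parameters): PV efficiency drops linearly
# with cell temperature, and a TEG recovers part of the waste heat.
def pv_efficiency(t_cell, eta_ref=0.15, beta=0.0045, t_ref=25.0):
    """Standard linear model: eta = eta_ref * (1 - beta*(T - T_ref))."""
    return eta_ref * (1 - beta * (t_cell - t_ref))

def hybrid_output(irradiance, area, t_cell, eta_teg=0.03):
    """Electrical output (W) of the PV panel plus a TEG fed by PV waste heat."""
    p_in = irradiance * area
    eta_pv = pv_efficiency(t_cell)
    p_pv = p_in * eta_pv
    waste_heat = p_in - p_pv          # heat carried away by the heat pipe
    p_teg = waste_heat * eta_teg      # direct heat-to-electricity conversion
    return p_pv + p_teg

print(round(hybrid_output(1000.0, 1.0, 65.0), 1))  # ≈ 149.3 W
```

The sketch shows the qualitative effect the abstract reports: the TEG term adds output that grows with the waste heat the heat pipe collects.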
Effective Recommendation Of Reviewers For Research Proposals
Authors: Nassma Salim Mohandes, Qutaibah Malluhi and Tamer Elsayed
In this project, we address the problem that a research funding agency faces when matching potential reviewers with submitted research proposals. A list of potential reviewers for a given proposal is typically selected manually by a small technical group within the agency. However, the manual approach can be an exhausting and challenging task, and (more importantly) might lead to ineffective selections that affect the subsequent funding decisions. This work presents an effective automated system that recommends reviewers for proposals and helps program managers in the assignment process. The system examines the CVs of the reviewers and ranks them by assigning a weight to each CV against each proposal. We propose an automatic method to recommend, for a given research proposal, a short list of potential reviewers who demonstrate expertise in the given research field or topic. To accomplish this task, our system extracts information from the full text of proposals and the CVs of reviewers. We discuss the proposed solution and our experience using it within the workflow of the Qatar National Research Fund (QNRF). We evaluate our system on a QNRF/NPRP dataset that includes the submitted proposals and approved lists of reviewers from the first 5 cycles of the NPRP funding program. Experimental results on this dataset validate the effectiveness of the proposed approach and show that our system performs best for proposals in three research areas: natural science, engineering, and medicine. The system does not perform as well for proposals in the other two domains, i.e., humanities and social sciences. Overall, our approach performs very well, with 68% relevant results, i.e., roughly 7 of every 10 recommendations are good matches. Our proposed approach is general and flexible.
Variations of the approach can be used in other applications, such as assigning conference papers to reviewers or teachers to courses. Our research also demonstrates that there are significant advantages to applying recommender-system concepts to the proposal-to-reviewer assignment problem. In summary, assigning proposals to reviewers is challenging and time-consuming when conducted manually by program managers, and software systems can offer automated tools that significantly facilitate their role. We follow previous approaches in treating reviewer finding as an information retrieval task: we use the same basic tools, but the goal is to find relevant people rather than relevant documents. For a specific user query (a proposal), the system returns a list of qualified reviewers, ranked by their relevance to the query.
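The retrieval view described above can be sketched as TF-IDF weighting plus cosine similarity over reviewer CVs, with the proposal as the query. The data and reviewer names below are toy placeholders, not QNRF data:

```python
import math
from collections import Counter

# Toy corpus: reviewer CVs as bags of words, proposal text as the query.
cvs = {
    "reviewer_a": "wireless sensor networks antenna propagation",
    "reviewer_b": "machine learning information retrieval ranking",
    "reviewer_c": "solar energy photovoltaic thermal systems",
}
proposal = "information retrieval for ranking research proposals"

def tfidf_vectors(docs):
    """Term frequency times inverse document frequency for each document."""
    n = len(docs)
    df = Counter(t for d in docs.values() for t in set(d.split()))
    vecs = {}
    for name, text in docs.items():
        tf = Counter(text.split())
        vecs[name] = {t: c * math.log(n / df[t]) for t, c in tf.items()}
    return vecs

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = dict(cvs, query=proposal)
vecs = tfidf_vectors(docs)
ranked = sorted(cvs, key=lambda r: cosine(vecs["query"], vecs[r]), reverse=True)
print(ranked[0])  # reviewer_b shares the retrieval/ranking vocabulary
```

A production system would add parsing of full-text PDFs and richer expertise models, but the ranking core is exactly this kind of query-document scoring.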
Intelligent Active Management Of Distribution Network To Facilitate Integration Of Renewable Energy Resources
Harvesting electric energy from renewable resources is seen as one of the solutions for securing energy sustainability as fossil fuels, the conventional resource for electric energy, are depleted. Renewable energy sources are typically connected to conventional distribution networks, which were not designed to accommodate generation sources; this reduces the security of the energy supply system. Moreover, the variability of renewable resources creates many operational challenges for the distribution network operator. Higher shares of distributed energy sources lead to unpredictable network flows, greater variations in voltage, and different network reactive power characteristics, as is already evident in many distribution networks. Local network constraints occur more frequently, adversely affecting the quality of supply. Yet distribution network operators are nevertheless expected to continue operating their networks securely and providing high-quality service to their customers. Active management of the distribution network may provide some answers to these problems. Indeed, active management will allow grids to integrate renewable energy resources efficiently by leveraging the inherent characteristics of this type of generation. The growth of renewable energy resources requires changes to how distribution networks are planned and operated: bi-directional flows need to be taken into account, and they must be monitored, simulated and managed. This paper describes features of the smart grid concept that can be employed in a distribution network for active management to facilitate the integration of renewable energy resources. The concepts include coordinated voltage control, microgrid operation and intelligent reactive power management, to name a few. The development of a physical testbed to test these new strategies for managing the distribution network is also described.
At the heart of these strategies is an intelligent controller acting as an energy management system. The development of this controller is also described and its operation explained.
Dynamic Team Theory With Nonclassical Information Structures Of Discrete-time Stochastic Dynamic Systems
Static team theory is a mathematical formalism for decision problems with multiple Decision Makers (DMs) that have access to different information and aim to optimize a common pay-off or reward functional. It is often used to formulate decentralized decision problems, in which decision-making authority is distributed through a collection of agents or players and the information available to the DMs to implement their actions is non-classical. Static team theory and decentralized decision making originated in the fields of management, organization behavior and government, with Marschak and Radner. However, they have far-reaching implications for all human activity, including science and engineering systems that comprise multiple components in which the information available to the decision-making components is either partially communicated or not communicated at all. Team theory and decentralized decision making can be applied to large-scale distributed systems such as transportation systems, smart grid energy systems, social network systems, surveillance systems, communication networks and financial markets. As such, these concepts are bound to play key roles in emerging cyber-physical systems, and they align well with the ARC'14 themes on Computing and Information Technology and Energy and Environment. Since the late 1960s, several attempts have been made to generalize static team theory to dynamic team theory, in order to account for decentralized decisions taken sequentially over time. However, to this date, no mathematical framework has been introduced to deal with non-classical information structures of stochastic dynamical decision systems, as has been done successfully over the last several decades for stochastic optimal control problems, which presuppose centralized information structures.
In this presentation, we put forward and analyze two methods that generalize static team theory to dynamic team theory, in the context of discrete-time stochastic nonlinear dynamical problems with team strategies based on non-classical information structures. Both approaches are based on transforming the original discrete-time stochastic dynamical decentralized decision problem into an equivalent one in which the observations and/or the unobserved state processes are independent processes, so that the information structures available for decisions are not affected by any of the team decisions. The first method derives team optimality conditions by direct application of static team theory to the equivalent transformed problem. The second method is based on a discrete-time stochastic Pontryagin maximum principle. The team optimality conditions are captured by a "Hamiltonian system" consisting of forward and backward discrete-time stochastic dynamical equations, and a conditional variational Hamiltonian with respect to the information structure of each team member while all other team members hold their optimal values.
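For background, the static team problem and the person-by-person optimality condition that the first method applies to the transformed problem can be stated as follows (this is the standard Marschak-Radner formulation, not notation taken from the abstract):

```latex
% Static team problem: N decision makers, DM i observes y_i and chooses
% u_i = \gamma_i(y_i); all minimize a common expected cost.
J(\gamma_1,\dots,\gamma_N)
  = \mathbb{E}\big[\, c\big(\omega,\gamma_1(y_1),\dots,\gamma_N(y_N)\big) \big]

% Person-by-person optimality: \gamma^* cannot be improved by changing one
% DM's strategy while the others are held fixed,
J(\gamma^*) \le J(\gamma^*_{-i},\gamma_i)
  \quad \forall\,\gamma_i,\; i = 1,\dots,N,

% which, under Radner's conditions, reduces to the pointwise condition
\gamma_i^*(y_i) \in \arg\min_{u_i}
  \mathbb{E}\big[\, c(\omega,u_i,\gamma^*_{-i}) \,\big|\, y_i \big].
```

The difficulty the abstract addresses is that in the dynamic, non-classical case the conditioning information y_i itself depends on other DMs' past decisions, which is what the transformation to independent observation processes removes.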
CuNWs/rGO Based Transparent Conducting Electrodes As A Replacement Of ITO In Opto-electric Devices
Transparent electrodes that conduct electrical current while allowing light to pass through are essential components of various opto-electric devices such as light-emitting diodes, solar cells, photodetectors and touch screens. Currently, indium tin oxide (ITO) is the best commercially available transparent conducting electrode (TCE). However, ITO is expensive, owing to the high cost of indium, and ITO thin films are too brittle to be used in flexible devices. To fulfill the demand for TCEs in a wide range of applications, high-performance ITO alternatives are required. Herein, we demonstrate an approach for the successful solution-based synthesis of high-aspect-ratio copper nanowires, which were then combined with reduced graphene oxide (rGO) to produce smooth thin-film TCEs on both glass and flexible substrates. Structural and compositional characterization of these electrodes was carried out using four-probe measurements, spectrophotometry, scanning electron microscopy (SEM), transmission electron microscopy (TEM) and atomic force microscopy (AFM). In addition to the morphological and electrical characterization, the samples were tested for durability in experiments involving exposure to various environmental conditions and electrode bending. Our fabricated transparent electrodes exhibited high performance, with a transmittance of 91.6% and a sheet resistance of 9 Ω/sq. Furthermore, the electrodes showed no notable loss in performance during the durability tests. These results make them a promising replacement for indium tin oxide as a transparent electrode and present a great opportunity to accelerate the mass development of devices such as high-efficiency hybrid silicon photovoltaics via simple and rapid solution processes.
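A common way to compare such transmittance/sheet-resistance pairs across TCE materials is the DC-to-optical conductivity ratio figure of merit, obtained from T = (1 + Z0·σopt/(2·Rs·σdc))^(-2) with Z0 = 377 Ω the impedance of free space. A quick sketch applying this standard formula to the values reported above (the figure of merit itself is not quoted in the abstract):

```python
def tce_figure_of_merit(transmittance, sheet_resistance_ohm_sq):
    """sigma_dc/sigma_opt solved from T = (1 + Z0/(2*Rs) * sigma_opt/sigma_dc)^-2,
    with Z0 = 377 ohm the impedance of free space."""
    z0 = 377.0
    return z0 / (2.0 * sheet_resistance_ohm_sq * (transmittance ** -0.5 - 1.0))

# Reported values: T = 91.6%, Rs = 9 ohm/sq
fom = tce_figure_of_merit(0.916, 9.0)
print(round(fom))
```

Values of a few hundred are generally considered competitive with ITO, which is consistent with the paper's replacement claim.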
Development Of A Remote SMA Experiment - A Case Study
Authors: Ning Wang, Jun Weng, Michael Ho, Xuemin Chen, Gangbing Song and Hamid Parsaei
A remote laboratory containing a specially designed experiment was built to demonstrate and visualize the characteristics of wire-shaped shape memory alloys (SMAs). In particular, the unit supports the study of the hysteretic behavior of SMAs and of how the electrical driving frequency changes the hysteresis loop. An SMA remote experiment with a novel unified framework was constructed at Texas A&M University at Qatar. In this project, we developed a new experiment data transaction protocol and the software package used in the remote lab. The new platform improves network firewall traversal and is free of software plug-ins. To provide a more realistic experience to users conducting the remote SMA experiment, the new solution also implements a real-time experiment video function. Compared to the traditional remote SMA experiment that uses the LabVIEW remote panel, the new solution has three advantages. First, the user interface is plug-in free and can run in different web browsers: the experiment control webpage is developed in JavaScript, a universally supported language, so any regular web browser can use all the features of the remote panel without extra software plug-ins. Second, the new remote lab resolves the issue of traversing a network firewall; end users only need Internet access and a web browser to operate the SMA experiment. Third, the new remote lab delivers real-time video from the experiment, providing a more realistic experience for the user. The new user interface can also be used on smart phones and tablet computers.
Compared to LabVIEW-based experiments, the data collected using the novel unified framework are similar except for the amplitude of the reference signal; the amplitude can differ because it is defined by the user. The data recorded from the new remote SMA experiment GUI have fewer samples per second than in the LabVIEW remote SMA experiment: data transmission in the GUI is limited to 140 samples per second to minimize memory usage and increase connection speed, whereas the LabVIEW experiment samples at 1000 samples per second. Nevertheless, the hysteresis of the SMA was successfully demonstrated by the data recorded with the novel unified framework, which match the original results collected locally. The study compares two implementation approaches for the remote SMA experiment: the traditional approach with the LabVIEW remote panel and the new approach with the novel unified framework. The differences between the two solutions are listed, and the advantages of the new SMA remote experiment based on the novel unified framework are presented. The capability of running remote experiments on portable devices allows users to learn by observing and interacting with a real experiment in an efficient way.
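The 1000-to-140 samples-per-second reduction mentioned above amounts to simple decimation of the acquisition stream; a minimal sketch (the keep-every-kth-sample strategy is our illustration of the idea, not necessarily the implementation the lab uses):

```python
# Sketch (hypothetical values): reduce a 1000 S/s acquisition stream to the
# ~140 S/s cap used by the web GUI by keeping every k-th sample.
def downsample(samples, rate_in=1000, rate_out=140):
    step = max(1, rate_in // rate_out)   # 1000 // 140 = 7
    return samples[::step]

one_second = list(range(1000))           # one second of raw samples
gui_stream = downsample(one_second)
print(len(gui_stream))                   # 143 samples, i.e. ~140 S/s
```

Since the SMA hysteresis loop varies slowly relative to 140 S/s, the decimated stream still reproduces the loop, which matches the comparison the authors report.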
Semantic Web Based Execution-time Merging Of Processes
Authors: Borna Jafarpour and Syed Sibte Raza Abidi
A process is a series of actions executed in a particular environment in order to achieve a goal. Often, several concurrent processes coexist in an environment to achieve several goals simultaneously. However, executing multiple processes is not always possible, for the following reasons: (1) all processes might need to be executed by a single agent that cannot execute more than one process at a time; (2) multiple processes may have interactions that hamper their concurrent execution by multiple agents. For example, several processes may contain conflicting actions whose concurrent execution would stop those processes from achieving their goals. The existing solution to these complications is to merge the processes into a unified, conflict-free and improved process before execution; this unified process is then executed by a single agent to achieve the goals of all processes. However, we believe this is not the optimal solution because (a) in some environments it is unrealistic to assume that execution of all processes, merged into a single process, can be delegated to a single agent; and (b) since merging is performed before actual execution, some of the assumptions made about the execution flow of individual processes may not hold during actual execution, rendering the merged process irrelevant. In this paper, we propose a semantic web based solution that merges multiple processes during their concurrent execution across several agents, addressing the above limitations.
Our semantic web Process Merging Framework features a Web Ontology Language (OWL) based ontology called the Process Merging Ontology (PMO), capable of representing a wide range of workflow and institutional Process Merging Constraints, the mutual exclusivity relations between those constraints, and their conditions. Process Merging Constraints must be respected during the concurrent execution of processes across several agents in order to achieve execution-time process merging. We use OWL axioms and Semantic Web Rule Language (SWRL) rules in the PMO to define the formal semantics of the merging constraints. A Process Merging Engine has also been developed to coordinate several agents, each executing a process pertaining to a goal, to perform process merging during execution. This engine runs the Process Merging Algorithm, which uses the Process Merging Execution Semantics and an OWL reasoner to infer the necessary modifications to the actions of each process so that the Process Merging Constraints are respected. To evaluate our framework, we merged several clinical workflows, each pertaining to a disease and represented as a process, so that they can be used for decision support for comorbid patients. Technical evaluations show the efficiency of our framework, and evaluation with the help of a domain expert shows the expressivity of the PMO in representing merging constraints and the capability of the Process Merging Engine in successfully interpreting them. We plan to extend our work to problems in business process model merging and AI plan merging.
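At its simplest, respecting a pairwise merging constraint at execution time means checking each action against the actions other agents are currently executing before dispatching it. A toy sketch with a hypothetical conflict set (far simpler than the OWL/SWRL machinery the framework actually uses):

```python
# Hypothetical pairwise conflicts between actions of concurrent clinical
# workflows; frozenset makes the pair order-independent.
conflicts = {frozenset(("administer_drug_a", "administer_drug_b"))}

def can_dispatch(action, executing):
    """An agent may start `action` only if no currently executing action
    of another agent conflicts with it."""
    return all(frozenset((action, other)) not in conflicts
               for other in executing)

print(can_dispatch("administer_drug_a", {"administer_drug_b"}))  # False
print(can_dispatch("administer_drug_a", {"take_blood_sample"}))  # True
```

The framework's reasoner-based approach generalizes this check to conditional and institutional constraints inferred over the PMO rather than a hand-written lookup table.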
Performance Of Hybrid-access Overlaid Cellular MIMO Networks With Transmit Selection And Receive MRC In Poisson Field Interference
Authors: Amro Hussen, Fawaz Al Qahtani, Mohamed Shaqfeh, Redha M. Radaydeh and Hussein Alnuweiri
This paper analyzes the performance of a hybrid control access scheme for small cells in the context of two-tier cellular networks. The analysis considers a MIMO transmit/receive array configuration that implements transmit antenna selection (TAS) and maximal ratio combining (MRC) under Rayleigh fading channels, where the interfering sources are described by Poisson field processes. The aggregate interference at each receive station is modeled as shot noise following a stable distribution. Furthermore, based on the interference awareness at the receive station, two TAS approaches are considered: signal-to-noise ratio (SNR)-based selection and signal-to-interference-plus-noise ratio (SINR)-based selection. In addition, the effect of delayed TAS due to an imperfect feedback channel on the performance measures is investigated. New analytical results for the hybrid-access scheme's downlink outage probability and error rate are obtained. To gain further insight into the system's behavior in limiting cases, asymptotic results for the outage probability and error rate at high SNR are also obtained; these can be used to describe diversity orders and coding gains. The derived analytical results are validated via Monte Carlo simulations.
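The Monte Carlo validation step can be sketched for the simplest case, SNR-based TAS with receive MRC over i.i.d. Rayleigh fading and no interference (antenna counts and thresholds below are arbitrary illustrative choices, not the paper's setup):

```python
import numpy as np

# Monte Carlo sketch: outage probability of SNR-based transmit antenna
# selection (TAS) with receive MRC over i.i.d. Rayleigh fading.
rng = np.random.default_rng(0)
nt, nr = 2, 2          # transmit / receive antennas
snr_avg = 10.0         # average per-branch SNR (linear)
snr_th = 5.0           # outage threshold (linear)
trials = 200_000

# |h_ij|^2 is exponentially distributed under Rayleigh fading; MRC sums the
# receive branches, TAS picks the transmit antenna with the largest post-MRC SNR.
h2 = rng.exponential(1.0, size=(trials, nt, nr))
post_mrc = snr_avg * h2.sum(axis=2)      # post-MRC SNR per transmit antenna
snr_sel = post_mrc.max(axis=1)           # SNR-based selection
p_out = np.mean(snr_sel < snr_th)
print(p_out)
```

The full analysis additionally injects stable-distributed Poisson field interference (for SINR-based selection) and feedback delay, but the estimator structure is the same: simulate the channel, apply the selection rule, count threshold crossings.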
ORYX GTL Data Integration And Automation System For 21st Century Environmental Reporting
Authors: Sue Sung, Pon Saravanan Neerkathalingam, Ismail Al-khabani, Kan Zhang and Arun Kanchan
ORYX GTL is an environmentally responsible company committed to creating an efficient, diversified energy business, developing its employees, and adding value to Qatar's natural resources. ORYX GTL considers monitoring and reporting consistent environmental data and setting accurate targets a critical component of increasing operational efficiency in a sustainable manner. Monitoring key metrics such as air emissions, criteria pollutants, flaring and energy provides opportunities to reduce environmental impacts and achieve cost savings. ORYX GTL has adopted a state-of-the-art information technology (IT) solution to enhance the data handling process in support of the company's environmental performance reports, such as the greenhouse gas (GHG) accounting and reporting (A&R) program required by Qatar Petroleum (QP). The automated environmental data reporting system has proven to be more efficient and accurate; it also increases consistency and requires fewer resources to report the data reliably. The system selected by ORYX GTL is the Data Integration and Automation (DIA) system designed and developed by Trinity Consultants on the Microsoft® .NET platform. The objective of this paper is to share the challenges and experience of designing, developing and implementing this advanced DIA system for critical environmental reporting functions at ORYX GTL, as part of the company's commitment to improving environmental performance. The DIA application can be used as the central data storage and handling system for all environmental data reporting. The DIA software includes several functions built on a state-of-the-art IT platform to achieve near real-time environmental monitoring, performance tracking, and reporting. The key functions include:
- Hourly data retrieval, aggregation, validation, and reconciliation from the plant process historian on a pre-defined schedule.
The data retrieved from the process historian may include hourly fuel usage, continuous emission monitoring data, and sampling data collected on a routine basis.
- A powerful calculation engine that allows users to build complex emission calculation equations; calculated results are stored in the database for use in reporting. In addition to user-specified equations, the system includes a complete calculation module to handle complex tank emission calculations.
- A web interface through which users can manage the reporting entity hierarchy and user security, set up tags/sources, create manual data entries, create and modify equations, and execute emission reports. The DIA application sends email notifications of tag data and calculation errors at user-specified intervals, and email recipients can respond promptly with root causes and corrective actions.
- Custom reports that can be designed to generate regulatory reports in the format required by QP or the Qatar Ministry of Environment.
The DIA system has significantly enhanced the quality of ORYX GTL's environmental reporting by reducing the human interaction required for process data extraction, validation, reconciliation, calculation, and reporting. ORYX GTL's proactive approach to implementing and integrating the DIA system provided the opportunity to improve reporting functions and stakeholder and regulator satisfaction, and it ensures adherence to the principles of environmental data reporting: completeness, consistency, transparency and accuracy.
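A user-defined equation in such a calculation engine is typically of the simple activity-times-emission-factor form. A minimal sketch (the fuel readings are invented, and the emission factor shown is a typical published value for natural gas combustion, not an ORYX GTL figure):

```python
# Illustrative sketch: hourly CO2 emissions computed from fuel usage pulled
# from a process historian, the kind of user-defined equation a DIA-style
# calculation engine evaluates on a schedule.
EF_CO2 = 53.06  # kg CO2 per MMBtu of natural gas (typical published factor)

def hourly_co2_kg(fuel_mmbtu_per_hr):
    return fuel_mmbtu_per_hr * EF_CO2

hourly_fuel = [120.0, 118.5, 121.3]      # three hourly historian readings
total = sum(hourly_co2_kg(f) for f in hourly_fuel)
print(round(total, 1))
```

The value of automating this is less in the arithmetic than in the surrounding validation and reconciliation: flagging missing or out-of-range historian tags before they propagate into regulatory totals.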
An Integrated Framework For Verified And Fault Tolerant Software
Authors: Samir Elloumi, Ishraf Tounsi, Bilel Boulifa, Sharmeen Kakil, Ali Jaoua and Mohammad Saleh
Fault tolerance techniques should let a program continue operating despite the presence of errors. They are of primary importance in mission-critical systems, whose failure may cause significant human and economic losses. For these reasons, researchers treat software reliability as an important research area, in terms of checking both design and functionality. Software testing aims to increase software correctness by verifying program outputs with respect to an input space generated in a bounded domain, and the fault tolerance approach offers effective error detection and recovery mechanisms such as backward recovery, forward recovery and redundancy. Our work consists of developing an integrated approach for software testing in a bounded domain that tolerates transient faults, in order to address deficiencies and obtain a robust, well-designed program. The developed framework comprises two types of tests: (i) a semi-automatic test that lets the user check the software by manually entering the values of a method and testing with specified values, and (ii) an automatic test that automates testing with prepared instances of the program and generated values for a chosen method within the software. To generate the input values of a program, we use "Korat", which requires a class invariant, a bounded domain and Java predicates (or preconditions). The framework uses reflection to verify the correctness of the method under test. Based on pre- and postconditions, or Java predicates, previously fixed by the user, backward recovery and forward recovery algorithms are applied to tolerate transient faults. For forward recovery, an efficient original solution has been developed, based on reducing the number of re-executions of a block of instructions.
In fact, re-execution starts from the current state instead of the initial state, under the hypothesis that no critical information is lost. A plug-in Java library has been implemented for the fault-tolerant version. The framework was tested on several Java programs and was applied to improve the robustness of a gas purification software package. ACKNOWLEDGMENT: This publication was made possible by a grant from the Qatar National Research Fund through the National Priority Research Program (NPRP) No. 04-1109-1-174. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the Qatar National Research Fund or Qatar University.
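The forward-recovery idea of re-executing a failed block from the current state under a postcondition check can be sketched as follows (the API and the toy fault model are our illustration, not the paper's Java library):

```python
# Sketch of forward recovery: re-execute a failed block from the current
# state rather than the initial state, assuming no critical information is
# lost, and check a postcondition after each attempt.
def run_with_forward_recovery(block, state, postcondition, max_retries=3):
    for attempt in range(max_retries + 1):
        state = block(state)
        if postcondition(state):
            return state            # a transient fault (if any) was tolerated
    raise RuntimeError("permanent fault: postcondition still violated")

# Toy block with a transient fault: the first execution silently does nothing.
calls = {"n": 0}
def flaky_increment(x):
    calls["n"] += 1
    return x if calls["n"] == 1 else x + 1

result = run_with_forward_recovery(flaky_increment, 41, lambda s: s == 42)
print(result)  # → 42
```

Retrying from the current state rather than a checkpoint is exactly what saves re-execution cost, at the price of the no-information-loss hypothesis stated above.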
Car Make And Model Detection System
Authors: Somaya Al-maadeed, Rayana Boubezari, Suchithra Kunhoth and Ahmed Bouridane
The deployment of highly intelligent and efficient machine vision systems has achieved new heights in multiple fields of human activity. The successful replacement of manual intervention with automated systems ensures safety, security and alertness in the transportation field, and automatic number plate recognition (ANPR) has become a common aspect of intelligent transportation systems. In addition to the license plate information, identifying the exact make and model of a car provides many additional cues for certain applications; authentication systems, for example, may benefit from an extra confirmation based on the vehicle model. Different car models are characterized by the uniqueness of the overall car shape, the position and structure of the headlights, and so on. Most research works rely on frontal or rear views of the car for recognition, while some handle arbitrary viewpoints. A template matching strategy is usually employed to find an exact match for the query image in a database of known car models. It is also possible to select and extract discriminative features from the region of interest (ROI) in the car image and, with the help of a suitable similarity measure such as the Euclidean distance, discriminate between the various classes/models. The main objective of this paper is to assess the significance of certain detectors and descriptors in the field of car make and model recognition. A performance evaluation of the SIFT, SURF and ORB feature descriptors for car recognition is already available in the literature. In this paper, we study the effectiveness of various combinations of feature detectors and descriptors for car model detection.
The combinations of six detectors (DoG, Hessian, Harris-Laplace, Hessian-Laplace, Multiscale Harris, Multiscale Hessian) with three descriptors (SIFT, LIOP and patch) were tested on three car databases. The Scale Invariant Feature Transform (SIFT), a popular object detection algorithm, allows the user to match different images and spot the similarities between them. The algorithm, based on keypoint selection and description, offers features invariant to illumination, scale, noise and rotation. Matching between images is performed using the Euclidean distance between descriptors: for each keypoint in the test image, the smallest Euclidean distance between its descriptor and all descriptors of the training image indicates the best match. Our experiments were carried out in MATLAB using the VLFeat toolbox. A maximum accuracy of 91.67% was achieved with the DoG-SIFT approach on database 1, comprising cropped ROIs of toy car images. For database 2, consisting of cropped ROIs of real car images, Multiscale Hessian-SIFT yielded the maximum accuracy of 96.88%. Database 3 comprised high-resolution real car images with background; testing was conducted on the cropped and resized ROIs of these images, and a maximum accuracy of 93.78% was obtained with the Multiscale Harris-SIFT combination. As a whole, these feature detectors and descriptors succeeded in recognizing the car models with an overall accuracy above 90%.
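The descriptor matching step described above, smallest Euclidean distance between each test descriptor and all training descriptors, can be sketched with stand-in random descriptors (not real SIFT output):

```python
import numpy as np

# Minimal sketch: nearest-neighbour matching of keypoint descriptors by
# smallest Euclidean distance, as used to match a test car image against
# training images. Descriptors here are random stand-ins.
rng = np.random.default_rng(1)
train_desc = rng.random((50, 128)).astype(np.float32)   # SIFT is 128-D
# Test descriptors: slightly perturbed copies of three training descriptors.
test_desc = train_desc[[3, 17, 42]] + 0.001 * rng.random((3, 128))

def match(test, train):
    # Pairwise Euclidean distances, shape (n_test, n_train)
    d = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=2)
    return d.argmin(axis=1)          # index of best match per test descriptor

print(match(test_desc, train_desc))  # recovers indices 3, 17, 42
```

Real pipelines usually add a ratio test (best vs. second-best distance) to reject ambiguous matches before voting on the car model.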
-
-
-
Visual Simultaneous Localization And Mapping With Stereo And Wide-angle Imagery
Authors: Peter Hansen, Muhammad Emaduddin, Sidra Alam and Brett Browning
Mobile robots provide automated solutions for a range of tasks in industrial settings, including but not limited to inspection. Our interest is automated inspection tasks, including gas leakage detection in natural gas processing facilities such as those in Qatar. Using autonomous mobile robot solutions removes humans from potentially hazardous environments, eliminates potential human errors from fatigue, and provides data logging solutions for visualization and off-line post-processing. A core requirement for a mobile robot to perform any meaningful inspection task is to localize itself within the operating environment. We are developing a visual Simultaneous Localization And Mapping (SLAM) system for this purpose. Visual SLAM systems enable a robot to localize within an environment while simultaneously building a metric 3D map using only imagery from an on-board camera head. Vision has many advantages over alternative sensors used for localization and mapping: it requires minimal power compared to lidar sensors, is relatively inexpensive compared to Inertial Navigation Systems (INS), and can operate in GPS-denied environments. There is extensive work related to visual SLAM, with most systems using either a perspective stereo camera head or a wide-angle-of-view monocular camera. Stereo cameras enable Euclidean 3D reconstruction from a single stereo pair and provide metric pose estimates. However, the narrow angle of view can limit pose estimation accuracy, as visual features can typically be 'tracked' only across a small number of frames. Moreover, the limited angle of view presents challenges for place recognition, whereby previously visited locations are detected and loop closure is performed to correct long-range integrated position estimate inaccuracies. In contrast, wide-angle-of-view monocular cameras (e.g. fisheye and catadioptric) trade spatial resolution for an increased angle of view.
This increased angle of view enables visual scene points to be tracked over many frames and can improve rotational pose estimates. It can also improve visual place recognition performance, as the same areas of a scene can be imaged under much larger changes in position and orientation. The primary disadvantage of a monocular wide-angle visual SLAM system is a scale ambiguity in the translational component of pose/position estimates. The visual SLAM system being developed in this work uses a combined stereo and wide-angle fisheye camera system with the aim of exploiting the advantages of each. For this we have combined visual feature tracks from both the stereo and fisheye cameras within a single non-linear least-squares Sparse Bundle Adjustment (SBA) framework for localization. Initial experiments using large-scale image datasets (approximately 10 kilometers in length) collected within Education City have been used to evaluate improvements in localization accuracy using the combined system. Additionally, we have demonstrated performance improvements in visual place recognition using our existing Hidden Markov Model (HMM) based place recognition algorithm.
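The quantity minimised in a bundle adjustment framework like the one above is the reprojection error. The sketch below is a simplified illustration, not the authors' SBA implementation: it uses a pure-translation camera pose and a pinhole model with illustrative intrinsics, whereas a full SBA optimises full 6-DoF poses and 3D points over residuals from every camera (stereo and fisheye alike, after undistortion).

```python
# Reprojection-error residuals, the building block of Sparse Bundle
# Adjustment: project 3-D scene points through a camera model and compare
# against the observed pixel coordinates.
def project(point, cam_t, f=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3-D point seen from a camera translated by cam_t."""
    x, y, z = (p - t for p, t in zip(point, cam_t))
    return (f * x / z + cx, f * y / z + cy)

def reprojection_residuals(points3d, observations, cam_t):
    """Stacked (du, dv) residuals between predicted and observed pixels."""
    res = []
    for pt, obs in zip(points3d, observations):
        u, v = project(pt, cam_t)
        res.extend([u - obs[0], v - obs[1]])
    return res

points = [(0.0, 0.0, 5.0), (1.0, -0.5, 4.0)]
true_t = (0.2, 0.0, 0.0)
obs = [project(p, true_t) for p in points]

# At the true pose the residuals vanish; at a perturbed pose they do not.
print(sum(r * r for r in reprojection_residuals(points, obs, true_t)))          # 0.0
print(sum(r * r for r in reprojection_residuals(points, obs, (0.0, 0.0, 0.0))) > 0)  # True
```

A non-linear least-squares solver (Gauss-Newton or Levenberg-Marquardt) iteratively adjusts the poses and points to drive these stacked residuals toward zero.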
-
-
-
Secured SCADA System
Abstract: SCADA (Supervisory Control And Data Acquisition) is a system that allows control of remote industrial equipment over a communication channel. These legacy communication channels were designed before the cyberspace era, and hence they lack any security measures, which makes them vulnerable to cyber attacks. RasGas, a joint venture between QP and ExxonMobil, was one victim of such an attack: it was hit with an unknown virus. The nuclear facility in Iran was hit with a virus called "Stuxnet", which particularly targets Siemens industrial control systems. The goal of this project is to design a model of a SCADA system that is secured against network attacks. Consider, for example, a simple SCADA system consisting of a water tank in a remote location and a local control room. The operator controls the water level and temperature using a control panel. The communication channel uses TCP/IP protocols over WiFi. The operator raises the temperature of the water by raising the power of the heater, then reads the real temperature of the heater and the water via installed sensors. We consider a man-in-the-middle adversary who has access to the network through WiFi. With basic skills s/he is able to redirect TCP/IP traffic to his machine (tapping) and alter data. He can, for instance, raise the water level until it overflows, or increase the temperature above the "danger zone", and send back fake sensor data by modifying the sensors' responses. We introduce an encryption device that encrypts the data such that, without the right security credentials, the adversary will not be able to interpret the data and hence will not be able to modify it. The device is installed at both the control room and the remote tank, and we assume both places are physically secured. To demonstrate the model, we design and set up a SCADA model emulator server that represents and serves as a water tank, consisting of actuators and sensors.
It is connected to a workstation through a network switch. We also set up an adversary workstation that taps and alters the communication between them. We design two hardware encryption/decryption devices using FPGA boards and connect them at the ports of both the server and the control workstation, which we assume to be in a secured zone. We then analyze the flow of the data stream through both the secured and non-secured states of the channel.
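The tamper-detection half of such a scheme can be sketched as follows. This is a minimal illustration, not the FPGA design described above: it shows only message authentication with HMAC under a shared key (the key and frame format are hypothetical), while confidentiality would additionally require encrypting the payload, e.g. with AES.

```python
# Each SCADA reading is sent with an HMAC tag computed under a key shared
# by the two devices. An adversary who alters the payload in transit
# cannot forge a valid tag without the key, so the frame is rejected.
import hmac
import hashlib

KEY = b"shared-secret-provisioned-on-both-devices"  # hypothetical shared key

def seal(payload: bytes) -> bytes:
    """Append a 32-byte SHA-256 HMAC tag to the payload."""
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify(message: bytes) -> bytes:
    """Return the payload if the tag checks out; otherwise raise."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered frame rejected")
    return payload

frame = seal(b"water_temp=61.5C")
assert verify(frame) == b"water_temp=61.5C"

# A man-in-the-middle flips the reading; verification fails.
forged = frame.replace(b"61.5", b"21.5")
try:
    verify(forged)
except ValueError as e:
    print(e)  # tampered frame rejected
```

`hmac.compare_digest` is used instead of `==` to avoid leaking the tag through timing differences.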
-
-
-
Measurement Platform Of Mains Zero Crossing Period For Powerline Communication
Authors: Souha Souissi and Chiheb Rebai
Power lines, mainly dedicated to and optimized for delivering electricity, are nowadays also used to transfer data. The low cost and wide coverage of power line communication (PLC) make it one of the pivotal technologies for building up the smart grid. The actual role of PLC technology is controversial: while some advocate PLC systems as very good candidates for certain applications, others discard them and look to wireless as a more elaborate alternative. It is clear that the smart grid will include multiple types of communication technologies, ranging from optics to wireless and wireline. Among wireline solutions, PLC appears to be the only technology with a deployment cost comparable to wireless, as the power installation and lines already exist. Narrowband PLC is a key element of the smart grid, supporting several applications such as Automatic Meter Reading (AMR), Advanced Metering Infrastructure (AMI), demand-side management, in-home energy management and vehicle-to-grid communications. A critical stage in designing an efficient PLC system is acquiring sufficient knowledge of channel characteristics such as attenuation, access impedance, multiple noise scenarios and synchronization. That is why characterizing the power line network has been the focus of several research works aiming to balance the robustness of power line communication against higher data rates. This interest in narrowband PLC systems motivates a deeper focus on channel characterization and modeling methods, a first step toward simulating channel behavior and then proposing stand-alone hardware for emulation. The authors are investigating the building blocks of a narrowband PLC channel emulator that helps designers evaluate and verify their system designs.
It allows the reproduction of real conditions for any narrowband PLC equipment (single-carrier or multicarrier) by providing three major functionalities: noise scenarios, signal attenuation, and a zero-crossing reference mainly used for single-carrier systems. For this purpose, the authors deploy a bottom-up approach to identify a real channel transfer function (TF) based on prior knowledge of the characteristics of the power cables used and the connected loads. A Matlab-based simulator is then defined that generates a given TF according to the defined parameters. The variation of the AC mains zero crossing (ZC) is also studied. Exhaustive in-field measurements of this reference have shown a perpetual fluctuation, presented as a jitter error. It reflects the varying AC mains characteristics (frequency, amplitude), which may be related to the non-linearity of the loads connected to the network and to the detection circuit used in PLC systems. The authors propose a ZC variation model according to the system environment (home/lab, rural). This model will be embedded in the channel emulator to reproduce the ZC reference variation. Regarding noise, few models specific to narrowband PLC are found in the literature. Some of these models have been implemented and tested on a DSP platform, which will also include the two previous elements: the TF and the ZC variation.
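The zero-crossing measurement described above can be sketched in a few lines. This is a minimal illustration, not the authors' measurement platform: the 50 Hz frequency, amplitude and sample rate are illustrative values, and real measurements would show the jitter the authors model rather than the ideal periods produced here.

```python
# Sample a mains-like sine wave, detect sign changes, linearly interpolate
# the crossing instants, and report the period between successive rising
# zero crossings. Jitter appears as deviation of these periods from the
# nominal 1/50 s.
import math

FS = 100_000.0   # sample rate in Hz (illustrative)
F_MAINS = 50.0   # nominal mains frequency

def rising_zero_crossings(samples, fs):
    """Interpolated times (s) where the signal crosses zero going upward."""
    times = []
    for i in range(1, len(samples)):
        a, b = samples[i - 1], samples[i]
        if a < 0.0 <= b:
            frac = -a / (b - a)            # linear interpolation between samples
            times.append((i - 1 + frac) / fs)
    return times

n = int(FS / F_MAINS * 4)                  # four mains cycles
signal = [math.sin(2 * math.pi * F_MAINS * i / FS) for i in range(n)]

zc = rising_zero_crossings(signal, FS)
periods = [t2 - t1 for t1, t2 in zip(zc, zc[1:])]
print([round(p * 1000, 3) for p in periods])   # ~20 ms per period
```

An emulator would perturb these crossing instants with the environment-dependent jitter model before handing the reference to the device under test.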
-
-
-
Social Media As A Source Of Unbiased News
By Walid Magdy
News media are usually biased toward some political views, and their coverage is limited to news reported by news agencies. Social media is currently a hub for users to report and discuss news, including news reported or missed by the news media. A system that can generate news reports from social media can therefore give a global, unbiased view of what is hot in a given region. In this talk, we present two years of research at QCRI tackling the problem of using social media to track and follow posts on ongoing news in different regions and on different topics. Initially, we show examples of bias in how different news media report news. We then explore the nature of social media platforms and list the research questions that motivated this work. The challenges of tracking news-related topics are discussed. An automatically adapting information-filtering approach is presented that allows tracking broad and dynamic topics in social media; this technique enables automatic tracking of news posts while coping with the rapid changes occurring in news stories. Our system, TweetMogaz, an Arabic news portal that generates news from Twitter, is then demoed. TweetMogaz reports in real time what is happening in hot regions in the Middle East, such as Syria and Egypt, in the form of comprehensive reports that include top tweets, images, videos and news articles shared by users on Twitter. It also reports news on other topics, such as sports. Moreover, search is enabled to allow users to get news reports on any topic of interest. The demo will show www.tweetmogaz.com live, where emerging news topics will appear in front of the audience. By the end of the talk, we will show some of the interesting examples noticed on the website in the past year.
In addition, a quick overview will be presented of one of the social studies carried out based on news trend changes on TweetMogaz. The study shows the changes in people's behavior when reporting and discussing news during major political changes, such as the one that happened in Egypt in July 2013. This work is the outcome of two years of research in the Arabic Language Technology group at the Qatar Computing Research Institute. The work has been published in six research and demo papers in tier-1 conferences such as SIGIR, CSCW, CIKM and ICWSM. The TweetMogaz system is protected by two patent applications filed in 2012 and 2014. The website currently serves around 10,000 users, and this number is expected to increase significantly when it is officially advertised. Please feel free to visit the TweetMogaz website to see the system live: www.tweetmogaz.com. Note: a new release of the website with a better design is expected by the time of the conference.
-
-
-
Semantic Model Representation For Human's Pre-conceived Notions In Arabic Text With Applications To Sentiment Mining
Authors: Ramy Georges Baly, Gilbert Badaro, Hazem Hajj, Nizar Habash, Wassim El Hajj and Khaled Shaban
Opinion mining is becoming highly important given the availability of opinionated data on the Internet and the different applications it can be used for. Intensive efforts have been made to develop opinion mining systems, in particular for the English language. However, models for opinion mining in Arabic remain challenging due to the complexity and rich morphology of the language. Previous approaches can be categorized into supervised approaches, which use linguistic features to train machine learning classifiers, and unsupervised approaches, which make use of sentiment lexicons. Different features have been exploited, such as surface-based, syntactic, morphological, and semantic features; however, the semantic extraction remains shallow. In this paper, we propose to go deeper into the semantics of the text when it is considered for opinion mining. We propose a model inspired by the cognitive process that humans follow to infer sentiment, where humans rely on a database of preconceived notions developed throughout their life experiences. A key aspect of the proposed approach is to develop a semantic representation of the notions. This model consists of a combination of a set of textual representations for the notion (Ti) and a corresponding sentiment indicator (Si). Thus, the pair (Ti, Si)
denotes the representation of a notion. However, notions can be constructed at different levels of text granularity, ranging from ideas expressed by single words to ideas expressed by full documents; the range also includes clauses, phrases, sentences, and paragraphs. To demonstrate the use of this new semantic model of preconceived notions, we develop the full representation of one-word notions by including the following set of syntactic features for Ti: word surface forms, stems, and lemmas, represented by binary presence and TF-IDF. We also include morphological features such as part-of-speech tags, aspect, person, gender, mood, and number. For the notion sentiment indicator Si, we create a new set of features that indicate the words' sentiment scores based on ArSenL, an internally developed Arabic sentiment lexicon, and on Sifaat, a third-party lexicon. The aforementioned features are extracted at the word level and are considered raw features. We also investigate the use of additional "engineered" features that reflect the aggregated semantics of a sentence. Such features are derived from word-level information and include the count of subjective words and the average sentiment score per sentence. Experiments are conducted on a benchmark dataset collected from the Penn Arabic TreeBank (PATB), already annotated with sentiment labels. Results reveal that raw word-level features do not achieve satisfactory performance in sentiment classification. Feature reduction was also explored to evaluate the relative importance of the raw features; the results showed low correlations between individual raw features and sentiment labels. On the other hand, including the engineered features had a significant impact on classification accuracy. The outcome of these experiments is a comprehensive set of features that reflects the representation of a one-word notion or idea in the human mind.
The one-word results also show promise for extending the model to higher-level context with multi-word notions.
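The engineered sentence-level features described above can be illustrated with a toy example. This is a minimal sketch, not the authors' pipeline: the lexicon below is a hypothetical English stand-in for ArSenL entries, and the subjectivity threshold is an assumed parameter.

```python
# Compute two engineered features for a sentence: the count of subjective
# words (words whose lexicon score exceeds a threshold in magnitude) and
# the average sentiment score over all tokens.
LEXICON = {"excellent": 0.9, "good": 0.6, "bad": -0.7, "terrible": -0.9}

def engineered_features(tokens, lexicon=LEXICON, threshold=0.3):
    scores = [lexicon.get(t, 0.0) for t in tokens]
    subjective = sum(1 for s in scores if abs(s) >= threshold)
    avg = sum(scores) / len(scores) if scores else 0.0
    return {"subjective_count": subjective, "avg_score": avg}

print(engineered_features("the food was excellent but service was bad".split()))
# {'subjective_count': 2, 'avg_score': 0.025}
```

In the paper these sentence-level aggregates complement the raw word-level features (surface forms, stems, lemmas, morphology) fed to the classifier.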
-
-
-
Intelligent M-Health Technology For Enhanced Smoking Cessation Management
Authors: Abdullah Alsharif and Nada Philip
Abstract: Smoking-related illnesses are costly to the NHS and a leading cause of morbidity and mortality. Pharmacological treatments, including nicotine replacement, some antidepressants, and nicotine receptor partial agonists, as well as individual- and group-based behavioural approaches, can help people stop smoking. Circa 40% of smokers attempt to quit each year, yet most relapse rapidly. The development of new tools acceptable to a wide range of smokers is therefore of particular interest. Smartphone interventions such as text messaging have shown some promise in helping people stop smoking. However, most of these studies were based on text-messaging interventions with no interactive functionality that could provide better feedback to the smoker. In addition, there is increasing evidence that smartphones act as a conduit to behavioural change in other forms of healthcare. A study of currently available iPhone apps for smoking cessation has shown a low level of adherence to key guidelines for smoking cessation; few, if any, recommended or linked the user to proven treatments such as pharmacotherapy, counselling, a "quit line" or a smoking cessation programme. Hence there is a need for clinical validation of the feasibility of app-based interventions in supporting smoking cessation programmes in community pharmacy settings. The goal of this study is to design and develop an m-health programme platform to support smoking cessation in a community setting. The primary objectives are to ascertain what users require from a mobile app-based smoking cessation system targeting and supporting smokers, and to review the literature for similar solutions.
The study also involves the design and development of an end-to-end smoking cessation management system based on these identified needs, comprising the Patient Hub, Cloud Hub, and Physician/Social Worker Hub, as well as the design and development of a decision support system based on data mining and an artificial intelligence algorithm. Finally, the system will be implemented and evaluated in a community setting.
-
-
-
MobiBots: Risk Assessment Of Collaborative Mobile-to-Mobile Malicious Communication
Authors: Abderrahmen Mtibaa, Hussein Alnuweiri and Khaled Harras
Cyber security threats are moving from traditional infrastructure to sophisticated, mobile, infrastructure-less settings. We believe this imminent transition is happening at a rate far exceeding the evolution of security solutions. In fact, the transformation of mobile devices into highly capable computing platforms makes security attacks originating from within the mobile network a reality. All recent security reports emphasize the steady increase in malicious mobile applications. Trend Micro, in its latest security report, shows that the number of malicious applications doubled in just six months, reaching more than 700,000 malware samples in June 2013. This represents a major issue for today's cyber security worldwide, and particularly in the Middle East. The latest Trend Micro report shows that the United Arab Emirates has "by far" the highest malicious Android application download volume worldwide. Moreover, Saudi Arabia, another Middle Eastern country, registers the highest downloads of high-risk applications. We believe that today's mobile devices are capable of initiating sophisticated cyberattacks, especially when they coordinate to form what we call a mobile distributed botnet (MobiBot). MobiBots leverage the absence of basic mobile operating system security mechanisms and the advantages of classical botnets, which makes them a serious security threat to any machine and/or network. In addition, a MobiBot's distributed architecture (see attached figure), its communication model, and its mobility make it very hard to track, identify and isolate. While there have been many Android security studies, we find that the proposed solutions cannot be adopted in the challenging MobiBot environment due to its decentralized architecture (figure). MobiBots bring significant challenges to network security.
Thus, securing mobile devices by vetting malicious tasks can be considered an important first step towards MobiBot security. Motivated by the trends mentioned above, in our project we first investigate the potential for, and impact of, large-scale infection and coordination of mobile devices. We highlight how mobile devices can leverage short-range wireless technologies in attacks against other mobile devices that come within proximity. We quantitatively measure the infection and propagation rates within MobiBots using short-range wireless technologies such as Bluetooth. We adopt an experimental approach based on a Mobile Device Cloud platform we have built, as well as three real-world data traces. We show that MobiBot infection can be very fast, infecting all nodes in a network in only a few minutes. Stealing data, however, requires a longer period of time and can be done more efficiently if the botnet utilizes additional sinks. We also show that while MobiBots are difficult to detect and isolate compared to common botnets, traditional prevention techniques cost at least 40% of the network capacity. We also study the scalability of MobiBots in order to understand the strengths and weaknesses of these malicious networks. We base our analysis on a dataset consisting of multiple disjoint communities, each of which is a real-world mobility trace. We show that MobiBots succeed in infecting up to 10K bots in less than 80 minutes.
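The proximity-based propagation measured above can be sketched as a discrete-time simulation. This is a minimal illustration, not the authors' Mobile Device Cloud experiments: the contact lists and infection probability below are illustrative, not drawn from their traces.

```python
# At each time step, every infected device infects any susceptible device
# it encounters (a short-range proximity contact, e.g. via Bluetooth)
# with probability p_infect. Returns the infected count over time.
import random

def simulate(contacts_per_step, p_infect=0.5, seed_device=0, rng_seed=1):
    """contacts_per_step: list of per-step lists of (a, b) proximity pairs."""
    rng = random.Random(rng_seed)
    infected = {seed_device}
    history = [len(infected)]
    for contacts in contacts_per_step:
        new = set()
        for a, b in contacts:
            # Infection can spread only across an infected/susceptible pair.
            if (a in infected) != (b in infected) and rng.random() < p_infect:
                new.update((a, b))
        infected |= new
        history.append(len(infected))
    return history

# Three time steps of pairwise encounters among 6 devices.
steps = [[(0, 1), (2, 3)], [(1, 2), (4, 5)], [(2, 4), (3, 5)]]
print(simulate(steps, p_infect=1.0))   # [1, 2, 3, 4]
```

Running the same contact sequence with lower `p_infect` or different seeds shows how contact structure and infection probability jointly shape the propagation rate.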
-
-
-
The Infrastructure Of Critical Infrastructure: Vulnerability And Reliability Of Complex Networks
By Martin Saint
* Background & Objectives
All critical infrastructure can be modeled as networks, or systems of nodes and connections, and many systems such as the electric grid, water supply, or telecommunications exist explicitly as networks. Infrastructures are interdependent: for instance, telecommunications depend on electric power, and control of the electric grid depends increasingly upon telecommunications, creating the possibility of a feedback loop following a disturbance. The performance of these systems under disturbance is related to their inherent network characteristics, and network architecture plays a fundamental role in reliability. What characteristics of networks affect their robustness? Could, for instance, the vulnerability of the electric grid to cascading failure be reduced?
* Methods
We create a failure model of the network in which each node and connection is initially in the operative state. At the first discrete time step, a network element is changed to the failed state. At subsequent time steps, a rule is applied that determines the state of random network elements based upon the state of their neighbors. Depending upon the rule and the degree distribution of the network, failures may be contained to a few nodes or connections, or may cascade until the entire network fails.
* Results
Quantitative measures from the model are the probability of network failure following the loss of a network element, and the expected size distribution of failure cascades. Additionally, there is a critical threshold below which infrastructure networks fail catastrophically. The electrical grid is especially vulnerable, as it operates close to the stability limit, and it has a low critical threshold beyond which the network displays a sharp transition to a fragmented state.
Failures in the electrical grid result not only in the loss of the capacity of the failed element itself, but also in load shifting to adjacent network elements, which contributes to further instability. While most failures are small, failure distributions are heavy-tailed, indicating occasional catastrophic failure. Many critical infrastructure networks are robust to random failure, but the existence of highly connected hubs makes them vulnerable to targeted attacks.
* Conclusions
It is possible to design network architectures that are robust to two different conditions: random failure and targeted attack. It is also possible to alter the architecture to increase the critical threshold at which failed network elements cause failure of the network as a whole. Surprisingly, adding more connections or capacity sometimes reduces robustness by creating more routes for failure to propagate. Qatar is in an ideal position to analyze and improve critical infrastructure from a systemic perspective. The modeling and simulation detailed above are readily applicable to analyzing real infrastructure networks.
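The failure model described in the Methods section can be sketched as a threshold cascade. This is a minimal deterministic illustration, not the authors' model: the ring topology and threshold values are illustrative, and the rule here fails a node whenever the fraction of its failed neighbors reaches a threshold phi.

```python
# Every node starts operative, one seed node fails, and at each step an
# operative node fails if the fraction of its failed neighbors reaches
# the threshold phi. The cascade runs until no further node fails.
def cascade(adjacency, seed, phi):
    """Return the set of failed nodes once the cascade stops."""
    failed = {seed}
    changed = True
    while changed:
        changed = False
        for node, nbrs in adjacency.items():
            if node in failed or not nbrs:
                continue
            frac = sum(1 for n in nbrs if n in failed) / len(nbrs)
            if frac >= phi:
                failed.add(node)
                changed = True
    return failed

# A 6-node ring: each node is connected to its two neighbors.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}

# Below the critical threshold a single seed fails the whole network;
# above it the cascade is contained to the seed.
print(len(cascade(ring, seed=0, phi=0.5)))   # 6
print(len(cascade(ring, seed=0, phi=0.6)))   # 1
```

Sweeping phi over many random topologies reproduces the sharp transition between contained failures and network-wide collapse that the abstract describes.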
-