Qatar Foundation Annual Research Forum Volume 2011 Issue 1
- Conference date: 20-22 Nov 2011
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2011
- Published: 20 November 2011
Massive Parallel Simulation of Motion of Nano-Particles at the Near-Wall Region in a Micro-Fluidics System
Authors: Othmane Bouhali, Reza Sadr and Ali Sheharyar
Abstract: One of the major challenges in Computational Fluid Dynamics (CFD) is the limitation in available computational speed, especially for N-body problems. Graphics processing units (GPUs) are considered an alternative to traditional CPUs for some CFD applications, in order to fully utilize the computational power and parallelism of modern graphics hardware. In the present work, a Matlab simulation tool has been developed to study the flow at the wall-fluid interface at the nanoscale in a micro-fluidic system. Micro-fluidics has become progressively important in recent years in order to address the demand for increased efficiency in a wide range of applications in advanced systems such as “lab-on-a-chip”. Lab-on-a-chip devices are miniature micro-fluidic labs that perform biological tests, such as proteomics, or chemical analysis/synthesis of highly exothermic reactions. For simulations of this kind, it takes a few hours to simulate a particular set of parameters even when run in parallel mode on the TAMUQ supercomputer. In order to accelerate the simulation, the Matlab program has been ported to a C/C++ program that exploits the GPU's massively parallel processing capabilities. Results from both methods will be shown and conclusions drawn.
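The core computational load in such an N-body particle simulation is the evaluation of pairwise interactions at every time step, which is what makes GPU offloading attractive. The sketch below is a minimal, hypothetical NumPy illustration of that all-pairs step, vectorized in the same spirit in which the C/C++ port parallelizes it on the GPU; the force law, time step and particle data are placeholder assumptions, not the model used in the actual study.

```python
import numpy as np

def pairwise_displacements(pos):
    """All-pairs displacement vectors, the O(N^2) kernel a GPU parallelizes well."""
    return pos[:, None, :] - pos[None, :, :]          # shape (N, N, 3)

def nbody_step(pos, vel, dt=1e-3, eps=1e-6):
    """Advance hypothetical nanoparticles one step under a toy inverse-square repulsion."""
    disp = pairwise_displacements(pos)
    dist2 = np.sum(disp**2, axis=-1) + eps            # softened squared distances
    np.fill_diagonal(dist2, np.inf)                   # no self-interaction
    forces = np.sum(disp / dist2[..., None]**1.5, axis=1)  # sum of pairwise repulsions
    vel = vel + dt * forces
    return pos + dt * vel, vel

# toy usage: 1,000 particles in a unit box (the wall-region physics is not represented here)
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, size=(1000, 3))
vel = np.zeros_like(pos)
pos, vel = nbody_step(pos, vel)
```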
Statistical Mixture-Based Methods and Computational Tools for High-Throughput Data Derived in Proteomics and Metabolomics Study
Authors: Halima Bensmail, James Wicker and Lotfi Chouchane
Abstract: Qatar is accumulating substantial local expertise in biomedical data analytics. In particular, QCRI is forming a multidisciplinary scientific computing group with a particular interest in machine learning, statistical modeling and bioinformatics. We are now in a strong position to address the computational needs of biomedical researchers in Qatar, and to prepare a new generation of scientists with multidisciplinary expertise.
The goal of genomics, proteomics and metabolomics is to identify and characterize the function of genes, proteins, and small molecules that participate in chemical reactions and are essential for maintaining life. This research area is expanding rapidly and holds great promise for the discovery of risk factors and potential biomarkers of diseases such as obesity and diabetes, two areas of increasing concern in the Qatari population.
In this paper, we develop new statistical clustering techniques based on mixture models with model selection for large biomedical datasets (proteomics and metabolomics). Deterministic and Bayesian approaches are used. The new approach is formulated within multivariate mixture-model cluster analysis to handle both normal (Gaussian) and non-normal (non-Gaussian) high-dimensional data.
To choose the number of mixture components, we develop model selection using the information measure of complexity (ICOMP) criterion of the estimated inverse Fisher information matrix. We have promising preliminary results, which suggest using our algorithm to identify obesity susceptibility genes in humans in a genome-wide association study and in mass spectrometry data generated from adipocyte tissue for an obesity study.
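As a rough illustration of mixture-based clustering with information-criterion model selection, the sketch below fits Gaussian mixtures for a range of component counts and keeps the model with the lowest criterion value. It uses scikit-learn's built-in BIC as a readily available stand-in for the ICOMP criterion developed in the paper, and random data in place of the proteomics/metabolomics measurements.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))          # placeholder for proteomics/metabolomics features

best_model, best_score = None, np.inf
for k in range(1, 11):                 # candidate numbers of mixture components
    gmm = GaussianMixture(n_components=k, covariance_type="full", random_state=0).fit(X)
    score = gmm.bic(X)                 # stand-in for the ICOMP criterion
    if score < best_score:
        best_model, best_score = gmm, score

labels = best_model.predict(X)         # cluster assignments under the selected model
print(best_model.n_components, best_score)
```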
Annotating a Multi-Topic Corpus for Arabic Natural Language Processing
Authors: Behrang Mohit, Nathan Schneider, Kemal Oflazer and Noah A. Smith
Abstract: Human-annotated data is an important resource for most natural language processing (NLP) systems. Most linguistically annotated text for Arabic NLP is in the news domain, but systems that rely on this data do not generalize well to other domains. We describe ongoing efforts to compile a dataset of 28 Arabic Wikipedia articles spanning four topical domains—sports, history, technology, and science. Each article in the dataset is annotated with three types of linguistic structure: named entities, syntax and lexical semantics. We adapted traditional approaches to linguistic annotation in order to make them accessible to our annotators (undergraduate native speakers of Arabic) and to better represent the important characteristics of the chosen domains.
For the named entity (NE) annotation, we start with the task of marking boundaries of expressions in the traditional Person, Location and Organization classes. However, these categories do not fully capture the important entities discussed in domains like science, technology, and sports. Therefore, where our annotators feel that these three classes are inadequate for a particular article, they are asked to introduce new classes. Our data analysis indicates that both the designation of article-specific entity classes and the token-level annotation are accomplished with a high level of inter-annotator agreement.
Syntax is our most complex linguistic annotation, which includes morphology information, part-of-speech tags, syntactic governance and dependency roles of individual words. While following a standard annotation framework, we perform quality control by evaluating inter-annotator agreement as well as eliciting annotations for sentences that have been previously annotated so as to compare the results.
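The paper reports quality control via inter-annotator agreement but does not name the measure used; purely as an illustration, the short sketch below computes Cohen's kappa, one common chance-corrected agreement statistic, over hypothetical token-level named-entity tags.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same tokens."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(labels_a) | set(labels_b))
    return (observed - expected) / (1 - expected)

# toy token-level NE tags from two annotators
ann1 = ["PER", "O", "LOC", "O", "ORG", "O"]
ann2 = ["PER", "O", "LOC", "LOC", "ORG", "O"]
print(cohens_kappa(ann1, ann2))
```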
The lexical semantics annotation consists of supersense tags, coarse-grained representations of noun and verb meanings. The 30 noun classes include person, quantity, and artifact; the 15 verb tags include motion, emotion, and perception. These classes provide a middle-ground abstraction of the large semantic space of the language. We have developed a flexible web-based interface, which allows annotators to review preprocessed text and add the semantic tags.
Ultimately, these linguistic annotations will be publicly released, and we expect that they will facilitate NLP research and applications for an expanded variety of text domains.
Challenges and Techniques for Dialectal Arabic Speech Recognition and Machine Translation
Authors: Mohamed Elmahdy, Mark Hasegawa-Johnson, Eiman Mustafawi, Rehab Duwairi and Wolfgang Minker
Abstract: In this research, we propose novel techniques to improve automatic speech recognition (ASR) and statistical machine translation (SMT) for dialectal Arabic. Since dialectal Arabic speech resources are very sparse, we describe how existing Modern Standard Arabic (MSA) speech data can be applied to dialectal Arabic acoustic modeling. Our assumption is that MSA is always a second language for all Arabic speakers, and in most cases we can identify the original dialect of a speaker even when he or she is speaking MSA. Hence, an acoustic model trained with a sufficient number of MSA speakers will implicitly model the acoustic features of the different Arabic dialects. Since MSA and dialectal Arabic do not share the same phoneme set, we propose phoneme set normalization in order to use MSA cross-lingually in dialectal Arabic ASR. After normalization, we applied state-of-the-art acoustic model adaptation techniques to adapt MSA acoustic models with a small amount of dialectal speech. Results indicate a significant decrease in word error rate (WER). Since it is hard to phonetically transcribe large amounts of dialectal Arabic speech, we studied the use of graphemic acoustic models, where the phonetic transcription is approximated by word letters instead of phonemes. A large number of Gaussians in the Gaussian mixture model is used to model missing vowels. In the case of graphemic adaptation, a significant decrease in WER was also observed. The approaches were applied to Egyptian Arabic and Levantine Arabic. The reported experimental work was performed while the first author was at the German University in Cairo in collaboration with Ulm University. This work will be extended at Qatar University in collaboration with the University of Illinois to cover ASR and SMT for Qatari broadcast TV. We propose novel algorithms for learning the similarities and differences between Qatari Arabic (QA) and MSA, for purposes of automatic speech translation and speech-to-text machine translation, building on our own definitive research in the relative phonological, morphological, and syntactic systems of QA and MSA, and in the application of translation to interlingual semantic parsing. Furthermore, we propose a novel efficient and accurate speech-to-text translation system, building on our research in landmark-based and segment-based ASR.
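Phoneme set normalization amounts to mapping dialect-specific phone labels onto the MSA phoneme inventory before the MSA-trained models are adapted. The sketch below is a purely illustrative Python rendition of that idea; the mapping table is hypothetical and does not reproduce the actual phoneme inventories or normalization rules used in the study.

```python
# Hypothetical dialect-to-MSA phone mapping used only to illustrate the normalization step.
DIALECT_TO_MSA = {
    "g":  "q",    # e.g., a dialectal /g/ realized where MSA has /q/
    "ts": "t",    # placeholder merger
    "zh": "j",    # placeholder merger
}

def normalize_transcription(dialect_phones, mapping=DIALECT_TO_MSA):
    """Map a dialectal phone sequence onto the MSA phoneme set (identity if unknown)."""
    return [mapping.get(p, p) for p in dialect_phones]

# example: a dialectal pronunciation lexicon entry rewritten with MSA phones,
# so MSA acoustic models can be adapted on dialectal data that shares one phoneme set
print(normalize_transcription(["g", "a", "l", "b"]))   # -> ['q', 'a', 'l', 'b']
```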
Pipeline Inspection using Catadioptric Omni-directional Vision Systems
Authors: Othmane Bouhali, Mansour Karkoub and Ali Sheharyar
Abstract: Oil and gas companies spend millions of dollars on the inspection of pipelines every year. The equipment used in carrying out the inspections is very sophisticated and requires highly specialized manpower. In this article we present a novel approach to pipeline inspection using a small mobile robot and an omnidirectional vision system, a combination of a reflective convex mirror and a perspective camera. Panoramic videos captured by the camera from the mirror are either stored or transmitted to a monitoring station to examine the inner surface of the pipeline. The videos are prepped, unwrapped, and unwarped to get a realistic image of the whole inner surface area of the pipeline. Several mirror shapes and unwarping techniques are used here to show the efficiency of this inspection technique. The image acquisition is instantaneous with a large field of view, which makes such systems suitable for use in dynamic environments. The cost of such a system is relatively low compared to other available inspection systems.
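Unwarping a catadioptric (mirror) image into a panoramic strip is essentially a polar-to-Cartesian resampling around the mirror centre. The sketch below is a minimal nearest-neighbour version of that transform, assuming a known image centre and inner/outer mirror radii; the real system's mirror-specific unwarping models are more elaborate.

```python
import numpy as np

def unwarp_omni(img, cx, cy, r_in, r_out, out_w=720, out_h=120):
    """Resample the annular mirror image into an (out_h x out_w) panoramic strip."""
    pano = np.zeros((out_h, out_w) + img.shape[2:], dtype=img.dtype)
    for col in range(out_w):
        theta = 2.0 * np.pi * col / out_w                  # angle around the mirror axis
        for row in range(out_h):
            r = r_in + (r_out - r_in) * row / (out_h - 1)  # radius between inner and outer rims
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                pano[out_h - 1 - row, col] = img[y, x]     # flip so the outer rim is at the top
    return pano

# toy usage on a synthetic 480x480 single-channel frame centred at (240, 240)
frame = np.random.randint(0, 256, size=(480, 480), dtype=np.uint8)
strip = unwarp_omni(frame, cx=240, cy=240, r_in=60, r_out=230)
```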
Development of a Telerobotic System to Assist the Physically Challenged Using Non-Contact Vision-Based Sensing and Command
Authors: Mansour Karkoub and M-G. Her
Abstract: It is often a problem for a physically challenged person to perform a simple routine task such as eating, moving around, and picking up things on a shelf. Usually, these tasks require assistance from a capable person. However, this total or partial reliance on others for daily routines may be bothersome to the physically challenged and diminishes their self-esteem. Moreover, getting around in a wheelchair, for example, requires the use of some form of a joystick, which is usually not very user-friendly. At Texas A&M Qatar, we developed two vision-based motion detection and actuation systems, which can be used to control the motion of a wheelchair without the use of a joystick and to remotely control the motion of a service robot for assistance with daily routines. The first vision system detects the orientation of the face, whereas the second detects the motion of color tags placed on the person's body. The orientation of the face and the motion of the color tags are detected using a CCD camera and can be used to command the wheelchair and the remote robot wirelessly. The computation of the color tags’ motion is achieved through image processing using eigenvectors and color system morphology. Through inverse dynamics and coordinate transformation, the motion of the operator's head, limbs, and face orientation can be computed and converted to the appropriate motor angles on the wheelchair and the service robot. Our initial results showed that it takes, on average, 65 milliseconds per calculation. The systems performed well even in complex environments, with errors that did not exceed 2 pixels and a response time of about 0.1 seconds. The results of the experiments are available at:
http://www.youtube.com/watch?v=5TC0jqlRe1U, http://www.youtube.com/watch?v=3sJvjXYgwVo, and http://www.youtube.com/watch?v=yFxLaVWE3f8.
It is our intent to implement the vision-based sensing system on an actual wheelchair and service robot and to test it with a physically challenged person.
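As a rough illustration of the color-tag tracking idea, the sketch below locates a tag's centroid in a frame with a simple HSV threshold and image moments in OpenCV. This is a simplified stand-in for the eigenvector and color-morphology processing described above, and the HSV range is a placeholder, not the tags actually used.

```python
import cv2
import numpy as np

# placeholder HSV range for a red-ish tag; the real system's tag colors would differ
LOWER = np.array([0, 120, 80])
UPPER = np.array([10, 255, 255])

def tag_centroid(frame_bgr):
    """Return the (x, y) pixel centroid of the color tag, or None if not found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)               # binary mask of tag-colored pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # centroid from image moments

# usage: the centroid's frame-to-frame displacement would then be mapped, via the
# inverse-dynamics / coordinate-transformation step, to wheelchair or robot motor angles
frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(tag_centroid(frame))
```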
A Simulation Study of Underwater Acoustic Communications in the North Field of Qatar
Authors: Bahattin Karakaya, Mazen Omar Hasna, Murat Uysal, Tolga Duman and Ali Ghrayeb
Abstract: Qatar is a leading natural gas producer and exporter in the world. Most of the natural gas (and oil) of Qatar is extracted from offshore wells and then transferred onshore for processing. In addition, Qatar is connected to the UAE by one of the world's longest underwater pipelines (managed by Dolphin Energy), which transfers processed gas from the offshore North Field to the UAE. Security of such critical offshore infrastructure against threats, along with environmental and preventive maintenance monitoring (e.g., pollution, leakage), is of utmost importance. A wireless underwater sensor network can be deployed for the security and safety of underwater pipelines. However, underwater acoustic communication brings its own challenges, such as limited transmission range, low data rates and link unreliability.
In this paper, we propose “cooperative communication” as an enabling technology to meet the challenging demands of underwater acoustic communication (UWAC). Specifically, we consider a multi-carrier and multi-relay UWAC system and investigate relay (partner) selection rules in a cooperation scenario. For relay selection, we consider different selection criteria, which rely either on the maximization of the signal-to-noise ratio (SNR) or the minimization of the probability of error (PoE). These are used in conjunction with so-called per-subcarrier, all-subcarriers, or subcarrier-grouping approaches in which one or more relays are selected.
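To make the per-subcarrier selection rule concrete, the sketch below picks, for each subcarrier, the relay whose weaker hop (source-to-relay vs. relay-to-destination) has the highest SNR, a standard max-min criterion for relaying. The random channel gains are placeholders; the paper's actual channel model combines Bellhop-based path loss with small-scale fading.

```python
import numpy as np

rng = np.random.default_rng(1)
n_relays, n_subcarriers = 4, 16

# placeholder per-subcarrier SNRs (linear scale) for both hops of each relay
snr_sr = rng.exponential(scale=10.0, size=(n_relays, n_subcarriers))  # source -> relay
snr_rd = rng.exponential(scale=10.0, size=(n_relays, n_subcarriers))  # relay -> destination

# per-subcarrier selection: end-to-end quality limited by the weaker hop
end_to_end = np.minimum(snr_sr, snr_rd)          # shape (n_relays, n_subcarriers)
best_relay = np.argmax(end_to_end, axis=0)       # chosen relay index per subcarrier
best_snr = end_to_end[best_relay, np.arange(n_subcarriers)]

print(best_relay, best_snr.round(1))   # different relays may win on different subcarriers
```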
In our simulation study, we choose an offshore area on the north-eastern side of Qatar (which coincides with the North Field) and conduct an extensive Monte Carlo simulation study for the chosen location to demonstrate the performance of the proposed UWAC system. Our channel model builds on an aggregation of both large-scale path loss and small-scale fading. For acoustic path loss modeling, we use the Bellhop ray-tracing software to precisely reflect the characteristics of the simulation location, such as the sound speed profile, sound frequency, bathymetry, type of bottom sediments, depths of nodes, etc. (see Fig. 1). Our simulation results for the error rate performance demonstrate significant performance improvements over direct transmission schemes and highlight the enhanced link reliability made possible by cooperative communications (see Fig. 2).
Video Aggregation: Delivering Videos over Wireless Smart Camera Networks
Authors: Vinay Kolar, Vikram Munishwar, Nael Abu-Ghazaleh and Khaled Harras
Abstract: The proliferation of wireless technologies and inexpensive network cameras has enabled low-cost and quick deployment of cameras for several surveillance applications, such as traffic monitoring and border control. Smart Camera Networks (SCNs) are networks of cameras that self-configure and adapt to improve their operation and reduce the demand on human operators. However, SCNs are constrained by the ability of the underlying wireless network. Streaming video over a network requires substantial bandwidth and strict Quality-of-Service (QoS) guarantees. In contrast, existing wireless networks have limited bandwidth, and their protocols do not guarantee QoS. Thus, for SCNs to scale beyond a small number of cameras, it is vital to design efficient video delivery protocols that are aware of the limitations of the underlying wireless network.
We propose to use Video Aggregation, a technique that enables efficient delivery of video in SCNs by combining related video streams. Existing SCNs use traditional routing protocols where intermediate network routers simply forward the video packets from the cameras towards the video analysis center (or base station). This is inefficient in SCNs, since multiple cameras often cover overlapping regions of interest and video information for these regions is redundantly transmitted over the network. The proposed video aggregation protocol eliminates redundant transmissions by dynamically pruning the overlapping areas at the intermediate routers. The routers blend the received streams into one panoramic video stream with no overlaps. Aggregation also dynamically controls the streaming rate to avoid network congestion and packet drops; the routers adjust the rate of the outgoing video by estimating the available network bandwidth. Thus, base stations receive video frames with minimal packet drops, improving the quality of the received video.
Our testbed and simulation results show that aggregation outperforms traditional routing both in terms of received video quality and network bandwidth usage. Our testbed experiments show that aggregation improves the received video quality (the Peak Signal-to-Noise Ratio metric) by 54%. In larger networks, we observed that aggregation eliminates up to 90% of the packet drops that were observed in SCNs with traditional routing. In the future, we plan to develop a suite of video delivery protocols, including SCN-aware scheduling and transport protocols.
Efficient Sequence Alignment Using MapReduce on the Cloud
Abstract: Over the past few years, advances in the field of molecular biology and genomic technologies have led to an explosive growth of digital biological information. The analysis of this large amount of data is commonly based on the extensive and repeated use of conceptually parallel algorithms, most notably in the context of sequence alignment. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. The cloud computing model is excellent at dealing with such bioinformatics applications, which require both the management of huge amounts of data and heavy computations.
The study aims at transforming a recently developed bioinformatics sequence alignment tool, named BFAST, to the cloud environment. The MapReduce version of the BFAST tool will be used to demonstrate the effectiveness of the MapReduce framework and the cloud-computing model in handling the intensive computations and management of the huge bioinformatics data.
A number of existing tools and technologies are utilized in this study to achieve an efficient transformation of the BFAST tool into the cloud environment. The implementation is mainly based on two core components: BFAST and MapReduce. BFAST is a software package for aligning next-generation genomic reads against a target genome with very high accuracy and reasonable speed. The MapReduce general-purpose parallelization technology (in its open-source implementation, Hadoop) appears to be particularly well adapted to the intensive computations and huge data storage tasks involved in the BFAST sequence alignment tool.
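To illustrate how sequence alignment maps onto the MapReduce model, the sketch below expresses a toy seed-based matching step as plain map and reduce functions: the mapper emits (seed, read) pairs and the reducer matches seeds against an index of the reference genome. It is a conceptual illustration only, using hypothetical helper data; it does not reproduce BFAST's actual indexing or the Hadoop job that wraps it.

```python
from collections import defaultdict

SEED_LEN = 4
# hypothetical reference index: seed -> list of positions in the reference genome
REFERENCE_INDEX = {"ACGT": [0, 12], "CGTA": [1], "GTAC": [2]}

def map_phase(read_id, read_seq):
    """Mapper: emit (seed, (read_id, offset)) for every seed in the read."""
    for off in range(len(read_seq) - SEED_LEN + 1):
        yield read_seq[off:off + SEED_LEN], (read_id, off)

def reduce_phase(seed, values):
    """Reducer: look the seed up in the reference index and emit candidate alignments."""
    for read_id, off in values:
        for ref_pos in REFERENCE_INDEX.get(seed, []):
            yield read_id, ref_pos - off      # candidate alignment position of the read

# simulate the shuffle stage of MapReduce, then reduce
reads = {"r1": "ACGTAC", "r2": "TTTT"}
grouped = defaultdict(list)
for rid, seq in reads.items():
    for seed, value in map_phase(rid, seq):
        grouped[seed].append(value)
candidates = [hit for seed, vals in grouped.items() for hit in reduce_phase(seed, vals)]
print(candidates)   # ('r1', 0) is emitted several times from mutually consistent seeds
```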
The MapReduce version of the BFAST tool is expected to offer better results than the original one in terms of maintaining good computational efficiency, accuracy, scalability, deployment and management efforts.
The study demonstrates how a general-purpose parallelization technology, i.e. MapReduce running on the cloud, can be tailored to tackle this class of bioinformatics problems with good performance and scalability, and, more importantly, how this technology could be the basis of a parallel computational platform for several problems in the context of bioinformatics. Although the effort of transforming existing bioinformatics algorithms from local compute infrastructure to the cloud is not trivial, the speed and flexibility of cloud computing environments provide a substantial boost with manageable cost.
Realistic Face and Lip Expressions for a Bilingual Humanoid Robot
Authors: Amna Alzeyara, Majd Sakr, Imran Fanaswala and Nawal Behih
Abstract: Hala is a bilingual (Arabic and English) robot receptionist located at Carnegie Mellon in Qatar. Hala is presented to users as a 3-D animated face on a screen. Users type to her, and she replies in speech synced with realistic facial expressions. However, there are two existing problems with the robot. First, Hala's animation engine does not fully adhere to existing research on face dynamics, which makes it difficult to create natural and interesting facial expressions. Natural expressions help towards an engaging user experience by articulating non-verbal aspects (e.g., confusion, glee, horror). Second, while she is speaking in Arabic, lip movements are not realistic because they were adopted from English utterances. In this work we address these two limitations.
Similar to the movie and video-game industry, we leverage Paul Ekman's seminal work on the Facial Action Coding System (FACS) to demarcate Hala's 3D face model into muscle primitives. These primitives are used to compose complex, yet natural, facial expressions. We have also authored an in-house tool which allows non-programmers (e.g., artists) to manipulate the face in real time to create expressions.
The sounds humans make while talking are symbolically captured as “phonemes”. The corresponding shapes of the lips for these sounds (i.e., phonemes) are called “visemes”. We used existing research, and observed each other (and a mirror), to develop visemes that accurately capture Arabic pronunciations. Hala can thus utilize English and Arabic visemes for accurate lip movement and syncing. We empirically tested and evaluated our work by comparing it with the previous lip movements for common Arabic utterances. Nonetheless, certain pronunciations can fire less-than-ideal visemes if they are preceded by silence.
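Viseme-driven lip sync boils down to a lookup from each phoneme in the synthesized utterance to a mouth-shape identifier that the animation engine blends over time. The sketch below shows that lookup in miniature; the phoneme symbols and viseme names are invented placeholders, not Hala's actual Arabic viseme set.

```python
# Hypothetical phoneme-to-viseme table (illustrative only; not Hala's real inventory).
PHONEME_TO_VISEME = {
    "b": "lips_closed",
    "m": "lips_closed",
    "a": "open_wide",
    "u": "rounded",
    "s": "teeth_narrow",
    "sil": "rest",          # silence maps to the neutral mouth pose
}

def visemes_for(phonemes):
    """Return the viseme sequence the animation engine should blend for an utterance."""
    return [PHONEME_TO_VISEME.get(p, "rest") for p in phonemes]

# usage: the TTS front-end would supply the timed phoneme string for the Arabic utterance
print(visemes_for(["sil", "m", "a", "s", "a"]))
# -> ['rest', 'lips_closed', 'open_wide', 'teeth_narrow', 'open_wide']
```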
Upon identifying and addressing these limitations, Hala has 11 new facial expressions for a more natural looking and behaving robot. This work also pioneered the first implemented subset of Arabic visemes on a robot.
Interference-Aware Spectrum Sharing Techniques for Next Generation Wireless Networks
Authors: Marwa Khalid Qaraqe, Ziad Bouida, Mohamed Abdallah and Mohamed-Slim Alouini
Abstract: Background: Reliable high-speed data communication that supports multimedia applications for both indoor and outdoor mobile users is a fundamental requirement for next generation wireless networks and requires a dense deployment of physically coexisting network architectures. Due to the limited spectrum availability, a novel interference-aware spectrum-sharing concept is introduced, where networks that suffer from a congested spectrum (secondary networks) are allowed to share the spectrum with other networks that have available spectrum (primary networks), under the condition that only limited interference is caused to the primary networks.
Objective: Multiple antennas and rate adaptation can be utilized as power-efficient techniques for improving the data rate of the secondary link while satisfying the interference constraint of the primary link, by allowing the secondary user to adapt its transmit antenna, power, and rate according to the channel state information.
Methods: Two adaptive schemes are proposed using multiple-antenna transmit diversity and adaptive modulation in order to increase the spectral efficiency of the secondary link while maintaining minimum interference with the primary link. Both the switching efficient scheme (SES) and the bandwidth efficient scheme (BES) use the scan-and-wait combining (SWC) antenna technique, where a secondary transmission occurs only when a branch with acceptable performance is found; otherwise the data is buffered.
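The scan-and-wait idea can be sketched as a simple loop: scan the transmit branches in turn, transmit on the first branch whose secondary-link SNR is acceptable and whose interference at the primary receiver stays below the peak constraint, and keep the data buffered (waiting for the channels to change) if no branch qualifies. The thresholds and random channel draws below are placeholder assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
N_BRANCHES = 4
SNR_MIN = 5.0        # placeholder SNR threshold meeting the target BER
I_PEAK = 1.0         # placeholder peak interference allowed at the primary receiver

def scan_and_wait(max_wait=10):
    """Return (branch, waits) for the first acceptable branch, buffering between scans."""
    for wait in range(max_wait):
        snr = rng.exponential(scale=6.0, size=N_BRANCHES)            # secondary-link branch SNRs
        interference = rng.exponential(scale=0.8, size=N_BRANCHES)   # interference to primary
        for branch in range(N_BRANCHES):                             # scan branches one by one
            if snr[branch] >= SNR_MIN and interference[branch] <= I_PEAK:
                return branch, wait                                  # transmit on this branch
        # no acceptable branch: data stays buffered until the channels change
    return None, max_wait

print(scan_and_wait())
```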
Results: In both schemes, the constellation size and the selected transmit branch are determined to minimize the average number of switches and achieve the highest spectral efficiency given a minimum bit-error-rate (BER) requirement, the fading conditions, and the peak interference constraint. For delay-sensitive applications, two schemes using power control are used: SES-PC and BES-PC. In these schemes the secondary transmitter sends data using a nominal power level, which is optimized to minimize the average delay. Several numerical examples show that the BES scheme increases the capacity of the secondary link.
Conclusion: The SES and BES schemes reach high spectral efficiency and BER performance at the expense of an increased delay. The SES-PC and BES-PC schemes minimize the average delay, satisfy the BER requirement, and maintain a high spectral efficiency. The proposed power optimization and power control processes minimize the delay and the dropping probability, especially if we extend the presented work to a multiuser scenario.
Conceptual Weighted Feature Extraction and Support Vector Model: A Good Combination for Text Categorization
Authors: Ali Mohamed Jaoua, Sheikha Ali Karam, Samir Elloumi and Fethi Ferjani
Abstract: While weighted features are known to be used in information retrieval (IR) systems for increasing recall during the document selection step, conceptual methods have helped in finding good features. Starting from the features of a sample of Arabic news items belonging to k different financial categories, and using the support vector model (SVM), k(k-1) classifiers are generated using one-against-one classification. A new document is submitted to the k(k-1) different classifiers and then, using a voting heuristic, is assigned to the most frequently selected category. Categorization results obtained for two different feature extraction methods, one based on optimal concepts and the other based on isolated labels, proved that isolated labels generate better features because of the specificity of the selected features. Therefore, we can say that the quality of the features, added to the weighting methods, is an important factor for a more accurate SVM classification. The proposed method based on isolated labels gives a good classification rate for Arabic news, greater than 80%, in the financial domain for five categories. Generalized to English texts and to more categories, it becomes a good preprocessing filter preceding an automatic annotation step, and therefore helps towards more accurate event structuring. The attached figure shows the different steps of the new categorization method.
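As a rough sketch of the pipeline described (weighted features plus one-against-one SVM voting), the snippet below builds TF-IDF weighted features and trains scikit-learn's SVC, which implements one-vs-one multi-class classification with voting internally. The tiny English example documents and category labels are placeholders for the Arabic financial news corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# placeholder training documents and financial categories (the real corpus is Arabic news)
docs = ["stocks rallied on earnings", "central bank raises rates",
        "oil prices slip on supply", "bond yields climb again",
        "new merger announced today", "quarterly profits beat estimates"]
labels = ["equities", "monetary", "commodities", "bonds", "m&a", "equities"]

# TF-IDF supplies the weighted features; SVC trains one-vs-one classifiers and votes
model = make_pipeline(TfidfVectorizer(), SVC(kernel="linear", decision_function_shape="ovo"))
model.fit(docs, labels)

print(model.predict(["bank holds interest rates steady"]))
```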
Record Linkage and Fusion over Web Databases
Authors: Mourad Ouzzani, Eduard Dragut, El Kindi and Amgad Madkour
Abstract: Many data-intensive applications on the Web require integrating data from multiple sources (Web databases) at query time. Online sources may refer to the same real world entity in different ways and some may provide outdated or erroneous data. An important task is to recognize and merge the various references that refer to the same entity at query time. Almost all existing duplicate detection and fusion techniques work in the offline setting and, thus, do not meet the online constraint. There are at least two aspects that differentiate online duplicate detection and fusion from its offline counterpart. First, the latter assumes that the entire data is available, while the former cannot make such a hard assumption. Second, several iterations (query submissions) may be required to compute the “ideal” representation of an entity in the online setting.
We propose a general framework to address this problem: an interactive caching solution. A set of frequently requested records is cleaned off-line and cached for future references. Newly arriving records in response to a stream of queries are cleaned jointly with the records in the cache, presented to users and appended to the cache.
We introduce two online record linkage and fusion approaches: (i) a record-based and (ii) a graph-based. They chiefly differ in the way they organize data in the cache as well as computationally. We conduct a comprehensive empirical study of the two techniques with real data from the Web. We couple their analysis with commonly used cache settings: static/dynamic, cache size and eviction policies.
Call Admission Control with Resource Reservation for OFDM Networks
Authors: Mehdi Khabazian, Osama Kubbar and Hossam Hassanein
Abstract: The scarcity of radio resources and variable channel quality pose many challenges to resource management for future all-IP wireless communications. One technique to guarantee a certain level of quality of service (QoS) is call admission control (CAC). Briefly, CAC is a mechanism which decides whether a new call should be admitted or rejected depending on its impact on the QoS of the current calls. Conventional CAC schemes such as guard channel, channel borrowing and queuing priority techniques only consider the instantaneous radio resource availability when making an admission decision; thus they are neither able to prevent network congestion nor to meet the QoS requirements of different users with multi-service requirements.
In this work, we propose a new CAC technique that looks ahead at the extra resources that may be needed, through a reservation technique, to offset changes in channel condition due to mobility. We show that during a call session, the needed radio resources may increase compared with the resources negotiated during call setup. Although such fluctuations are fairly small for a single call, they are not negligible when the network is congested. As a result, some ongoing calls may experience QoS degradation. We show that such a consideration is critical in orthogonal frequency division multiplexing (OFDM) wireless networks such as 3GPP LTE, where radio resources are assigned to users depending on channel quality. The study assumes two types of applications, denoted wide-band and narrow-band, and the performance of the proposed algorithm is modeled through queuing theory and event-driven simulation approaches. The results show that such a reservation technique improves call admission performance significantly in terms of call blocking, call dropping and call QoS degradation probabilities, and that it outperforms conventional CAC schemes with insignificant loss in network capacity.
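The admission rule itself can be sketched very simply: admit a new call only if its request (plus its own headroom) fits within the currently free capacity after setting aside a reservation margin that covers likely mid-call resource growth of ongoing calls. The numbers and fixed per-class margins below are illustrative assumptions, not the queuing model analyzed in the paper.

```python
# illustrative resource units; not the paper's parameters
TOTAL_RESOURCE_BLOCKS = 100

# assumed reservation margin per call, covering possible mid-call growth due to mobility
RESERVE_PER_CALL = {"wide-band": 2, "narrow-band": 1}

def admit(new_call_class, requested_blocks, ongoing_calls, used_blocks):
    """Admit only if the request plus its headroom fits what is left after reservations."""
    reserve = sum(RESERVE_PER_CALL[c] for c in ongoing_calls)
    free = TOTAL_RESOURCE_BLOCKS - used_blocks - reserve
    return requested_blocks + RESERVE_PER_CALL[new_call_class] <= free

ongoing = ["wide-band", "wide-band", "narrow-band"]    # classes of calls already admitted
print(admit("narrow-band", requested_blocks=8, ongoing_calls=ongoing, used_blocks=90))
# -> False: 10 blocks are nominally free, but only 5 remain after the reservation margin
```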
A Distributed Reconfigurable Active Solid State Drive Platform for Data Intensive Applications
Authors: Mazen Saghir, Hassan Artail, Haitham Akkary, Hazem Hajj and Mariette Awad
Abstract: The ability to efficiently extract useful information from volumes of data distributed across myriad networks is hindered by the latencies inherent to magnetic storage devices and computer networks. We propose overcoming these limitations by leveraging solid-state drive (SSD) and field-programmable gate array (FPGA) technologies to process large streams of data directly at the storage sites.
Our proposed reconfigurable, active, solid-state drive (RASSD) platform consists of distributed nodes that couple SSDs with FPGAs. While SSDs store data, FPGAs implement processing elements that couple soft-core RISC processors with dynamically reconfigurable logic resources. The processors execute data-processing software drivelets, and the logic resources implement hardware for accelerating performance-critical operations. Executing appropriate drivelets and using matching hardware accelerators enables us to efficiently process streams of data stored across SSDs.
To manage the configuration of RASSD nodes and provide a transparent interface to applications, our platform also includes distributed middleware software. Client local middleware (CLM) resides on client host machines to interpret application data-processing requests, locate storage sites, and exchange data-processing requests and results with middleware servers (MWS). MWS connect to clusters of RASSD nodes and contain libraries of drivelets and accelerator configuration bit streams. An MWS loads the appropriate drivelet and accelerator bit streams onto a RASSD node's FPGA, aggregates processed data, and returns it to a CLM.
To evaluate our platform, we implemented a simple system consisting of a host computer connected to a RASSD node over a peer-to-peer network. We ran a keyword search application on the host computer, which also provided middleware functionality. We then evaluated this platform under three configurations. In configuration C1, the RASSD node was only used to store data while all data was processed by the MWS running on the host computer. In configuration C2, the data was processed by a drivelet running on the RASSD node. Finally, in configuration C3, the data was processed both by a drivelet and a hardware accelerator.
Our experimental results show that C3 is 2x faster than C2, and 6x faster than C1. This demonstrates our platform's potential for enhancing the performance of data-intensive applications over current systems.
A Dynamic Physical Rate Adaptation for Multimedia Quality-Based Communications in IEEE 802.11 Wireless Networks
By Mariam Fliss
Abstract: Because IEEE 802.11 Wireless Local Area Networks (WLANs) are based on a radio/infrared link, they are more sensitive to channel variations and connection drops. The support for multimedia applications over WLANs therefore becomes inconvenient, due to failures to meet link-rate and transmission-delay requirements. We studied link adaptation facets and the Quality of Service (QoS) requirements essential for successful multimedia transmissions. In fact, the efficiency of rate control schemes is linked to how quickly they respond to channel variation. The 802.11 physical layers provide multiple transmission rates (different modulation and coding schemes). The latest 802.11g version supports 12 physical rates of up to 54 Mbps in the 2.4 GHz band. As a result, Mobile Stations (MSs) are able to select the appropriate link rate depending on the required QoS and the instantaneous channel conditions, to enhance overall system performance. Hence, the implemented link adaptation algorithm represents a vital part of achieving the highest transmission capability in WLANs. When to decrease and when to increase the transmission rate are the two fundamental questions faced when designing a new physical-rate control mechanism. Many research works focus on tuning channel estimation schemes to better detect when the channel condition has improved enough to accommodate a higher rate, and then adapt the transmission rate accordingly. However, those techniques usually entail modifications to the current 802.11 standard. Another way to perform link control is based on local Acknowledgment (Ack) information at the transmitter station. Consequently, two such techniques were accepted by the standard due to their efficiency and implementation simplicity.
We propose a new dynamic time-based link adaptation mechanism, called MAARF (Modified Adaptive Auto Rate Fallback). Besides the transmission frame results, the new model uses a Round Trip Time (RTT) technique to adequately select an instantaneous link rate. The proposed model is evaluated against the most recent techniques adopted by the IEEE 802.11 standard: the ARF (Auto Rate Fallback) and AARF (Adaptive ARF) schemes. Simulation results will be given to illustrate the link quality improvement of multimedia transmissions over Wi-Fi networks and to compare its performance with previously published results.
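For context, the ARF family of schemes that MAARF extends adapts the rate purely from transmission outcomes: step the rate up after a run of consecutive successes and step it down after consecutive failures. The sketch below is a generic ARF-style controller of that kind with placeholder thresholds; it is not the MAARF algorithm itself, which additionally folds in RTT measurements.

```python
RATES_MBPS = [6, 9, 12, 18, 24, 36, 48, 54]   # the 802.11g OFDM rate set

class ArfStyleController:
    """Generic ARF-style rate control: rate up after N successes, down after M failures."""
    def __init__(self, up_after=10, down_after=2):
        self.idx = 0                      # start at the most robust rate
        self.successes = 0
        self.failures = 0
        self.up_after = up_after          # placeholder thresholds
        self.down_after = down_after

    def on_frame_result(self, acked):
        if acked:
            self.successes += 1
            self.failures = 0
            if self.successes >= self.up_after and self.idx < len(RATES_MBPS) - 1:
                self.idx += 1             # try the next faster rate
                self.successes = 0
        else:
            self.failures += 1
            self.successes = 0
            if self.failures >= self.down_after and self.idx > 0:
                self.idx -= 1             # fall back to a more robust rate
                self.failures = 0
        return RATES_MBPS[self.idx]

ctrl = ArfStyleController()
for acked in [True] * 12 + [False] * 3:   # a burst of ACKs followed by losses
    rate = ctrl.on_frame_result(acked)
print(rate)
```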
Interference Cancellation of Hop-By-Hop Beamforming for Dual-Hop MIMO Relay Networks
Authors: Fawaz AL-Qahtani and Hussein Alnuweiri
Abstract: Cooperative relaying systems are gaining much interest because they can improve the average link signal-to-noise ratio by replacing longer hops with multiple shorter hops. Relaying has been introduced to enable a source (i.e., a mobile terminal) to communicate with a target destination via a relay (i.e., another mobile terminal). Furthermore, multiple-input multiple-output (MIMO) communication systems have been considered powerful candidates for the fourth generation of wireless communication standards because they can achieve further performance improvements, including an increase in the achievable spectral efficiency and peak data rates (multiplexing) and robustness against severe fading effects (transmit beamforming). In this work, we consider a hop-by-hop beamforming relaying system over Rayleigh fading channels. In wireless communication environments, it is well understood that the performance of wireless networks can be limited by both fading and co-channel interference (CCI). The multiple antennas at each node of the relaying system can be used to adaptively modify the radiation pattern of the array to reduce interference by placing nulls in the direction of the dominant interferers. In this paper, we investigate the effect of CCI on the performance of the hop-by-hop beamforming relaying system. First, we derive exact closed-form expressions for the outage probability and average symbol error rates. Moreover, we look into the high signal-to-noise ratio (SNR) regime and study the diversity order and coding gain achieved by the system.
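The paper derives the outage probability in closed form; as a simplified numerical companion to that quantity, the sketch below runs a Monte Carlo estimate of dual-hop outage under idealized assumptions (i.i.d. Rayleigh branches, beamforming gain modeled as a sum of exponential branch SNRs, and no CCI). All parameters are placeholders, and this is only a conceptual cross-check, not the system model analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_tx, n_rx = 4, 4            # antennas per node (placeholder)
snr_avg = 10.0               # average per-branch SNR, linear scale (placeholder)
gamma_th = 4.0               # outage SNR threshold (placeholder)
trials = 200_000

# With beamforming/combining over i.i.d. Rayleigh branches, the effective SNR of a hop
# behaves like a sum of exponential branch SNRs (a Gamma variate); CCI is ignored here.
hop1 = rng.gamma(shape=n_tx, scale=snr_avg, size=trials)
hop2 = rng.gamma(shape=n_rx, scale=snr_avg, size=trials)

end_to_end = np.minimum(hop1, hop2)          # bottleneck SNR of the two hops
outage = np.mean(end_to_end < gamma_th)      # empirical outage probability
print(outage)
```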
Novel Reduced-Feedback Wireless Communication Systems
Authors: Mohammad Obaidah Shaqfeh, Hussein Alnuweiri and Mohamed-Slim Alouini
Abstract: Modern communication systems apply channel-aware adaptive transmission techniques and dynamic resource allocation in order to exploit the peak conditions of the fading wireless links and to enable significant performance gains. However, conveying the channel state information from the users’ mobile terminals to the access points of the network consumes a significant portion of the scarce air-link resources and depletes the battery resources of the mobile terminals rapidly. Despite its evident drawbacks, channel information feedback cannot be eliminated in modern wireless networks, because blind communication technologies cannot support the ever-increasing transmission rates and the high quality-of-experience demands of current ubiquitous services.
Developing new transmission technologies with reduced feedback requirements is therefore sought. Network operators will benefit from releasing the bandwidth resources reserved for feedback communications, and clients will enjoy the extended battery life of their mobile devices. The main technical challenge is to preserve the expected transmission rates over the network despite significantly decreasing the channel information feedback. This is a noteworthy research theme, especially since there is no mature theory of feedback communication in the existing literature, despite the growing number of publications on the topic in the last few years. More research efforts are needed to characterize the trade-off between the achievable rate and the required channel information, and to design new reduced-feedback schemes that can be flexibly controlled based on operator preferences. Such schemes can then be introduced to the standardization bodies for consideration in next generation broadband systems.
We have recently contributed to this field and published several journal and conference papers. We pioneered a novel reduced-feedback opportunistic scheduling scheme that combines many desired features, including fairness in resource distribution across the active terminals and distributed processing at the MAC layer. In addition, our scheme operates close to the upper capacity limits of achievable transmission rates over wireless links. We have also proposed another, hybrid scheme that enables adjusting the feedback load flexibly based on rate requirements. We are currently investigating other novel ideas for designing reduced-feedback communication systems.
Learning to Recognize Speech from a Small Number of Labeled Examples
Abstract: Machine learning methods can be used to train automatic speech recognizers (ASR). When porting ASR to a new language, however, or to a new dialect of spoken Arabic, we often have too few labeled training data to allow learning of a high-precision ASR. It seems reasonable to think that unlabeled data, e.g., untranscribed television broadcasts, should be useful to train the ASR; human infants, for example, are able to learn the distinction between phonologically similar words after just one labeled training utterance. Unlabeled data tell us the marginal distribution of speech sounds, p(x), but do not tell us the association between labels and sounds, p(y|x). We propose that knowing the marginal is sufficient to rank-order all possible phoneme classification functions, before the learner has heard any labeled training examples at all. Knowing the marginal, the learner is able to compute the expected complexity (e.g., derivative of the expected log covering number) of every possible classifier function, and based on measures of complexity, it is possible to compute the expected mean-squared probable difference between training-corpus error and test-corpus error. Upon presentation of the first few labeled training examples, then, the learner simply chooses, from the rank-ordered list of possible phoneme classifiers, the first one that is reasonably compatible with the labeled examples. This talk will present formal proofs, experimental tests using stripped-down toy problems, and experimental results from English-language ASR; future work will test larger-scale implementations for ASR in the spoken dialects of Arabic.
Demonstration Prototype of a High-Fidelity Robotic-Assisted Suturing Simulator
Authors: Georges Younes, George Turkiyyah and Jullien Abi Nahed
Abstract: Rapid advances in robotic surgical devices have put significant pressure on physicians to learn new procedures using newer, more sophisticated instruments. This in turn has increased the demand for effective and practical training methods using these technologies, and has motivated the development of surgical simulators that promise to provide practical, safe, and cost-effective environments for practicing demanding robotic-assisted procedures.
However, despite the significant interest and effort in the development of such simulators, the current state-of-the-art surgical simulators are lacking. They introduce significant simplifications to obtain real-time performance, and these simplifications often come at the expense of realism and fidelity. There is a need to develop and build the next generation of surgical simulators that improve haptic and visual realism. The primary challenges in building such high-fidelity simulations of soft-tissue organs come from two computationally demanding tasks that must execute in real time: managing the complexity of the geometric environment that is being dynamically modified during the procedure, and modeling the stresses and deformations of the soft tissue interacting with surgical instruments and subjected to cutting and suturing. The mechanics of soft tissue are further complicated by its anisotropic and nonlinear behavior.
In this presentation, we describe an initial prototype of a robotic-assisted simulator applied to a simplified task in a prostatectomy procedure (anastomosis). The simulator demonstrates new methodologies for modeling nonlinear tissue models, integrated high-resolution geometric contact detection for handling inter- and intra-organ collisions in the dynamically changing geometric environment of the simulation, and suturing with threads. The prototype is deployed on a bi-manual haptic feedback frame and serves as a building block for simulations operating in more complex anatomical structures.
Investigating the Dynamics of Densely Crowded Environments at the Hajj Using Image Processing Techniques
Authors: Khurom Hussain Kiyani and Maria Petrou
Abstract: Background: With the world's population projected to grow from the current 6.8 billion to around 9 billion by 2050, and with the resultant increase in megacities and the associated demands on public transport, there is an urgent imperative to understand the dynamics of crowded environments. Very dense crowds that exceed 3 people per square metre present many challenges for efficiently measuring quantities such as density and pedestrian trajectories. The Hajj, and the associated minor Muslim pilgrimage of Umrah, present some of the most densely crowded human environments in the world, and thus offer an excellent observational laboratory for the study of dense crowd dynamics. An accurate characterisation of such dense crowds can not only improve existing models, but also help to develop better intervention strategies for mitigating crowd disasters such as the 2006 Hajj Jamarat stampede that killed over 300 pilgrims. With Qatar set to be one of the cultural centres of the region, e.g. hosting the FIFA World Cup in 2022, the proper control and management of large singular events is important not only for our safety but also for our standing on the international stage.
Objectives: To use the data gathered from the Hajj to assess the dynamics of large dense crowds with a particular focus on crowd instabilities and pattern formation.
Methods: We will make use of advanced image processing and pattern recognition techniques (mathematical morphology, feature selection, etc.) to assess some of the bulk properties of crowds, such as density and flow, as well as finer details such as the ensemble of pedestrian trajectories. We are currently in the process of taking multiple wide-angle stereo videos at this year's Hajj with our collaborators at Umm Al-Qurra University in Mecca. Multiple video capture of the same scene from different angles allows one to overcome the problem of occlusion in dense crowds.
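As an elementary illustration of a morphology-based bulk measure, the sketch below estimates a per-frame occupancy fraction with OpenCV background subtraction followed by a morphological opening. This is only a crude proxy for crowd occupancy, not a calibrated people-per-square-metre density, and it is not the project's actual analysis pipeline.

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
kernel = np.ones((3, 3), np.uint8)

def occupancy_fraction(frame_bgr):
    """Fraction of pixels flagged as moving crowd, after morphological clean-up."""
    fg = subtractor.apply(frame_bgr)                      # foreground (moving people) mask
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)     # remove small speckle noise
    return float(np.count_nonzero(fg)) / fg.size

# usage on a video: densities = [occupancy_fraction(f) for f in frames]
frame = np.zeros((240, 320, 3), dtype=np.uint8)
print(occupancy_fraction(frame))
```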
Results: We will present our field study at this year's Hajj, where we took extensive high-quality multiple-camera video data. We will also present some of the techniques which we will be using over the coming year to analyze this large data set that we have now successfully collated.
Software for Biomechanical Performance Analysis of Force Plate Data
Authors: Manaf Kamil and Dino Palazzi
Abstract: Background: Force plates have been used in sports biomechanics since the 1960s. However, extracting useful performance-related information from curve analysis is a complicated process. It requires combined knowledge of signal processing (filtering), mechanical equations, testing protocols, biomechanics, and discrete mathematical analysis to properly process the data.
Objectives: The aim is to provide a practical and accurate tool to analyze force curves from select standard biomechanical performance tests (e.g., counter movement jump, drop jump).
Methods: The software is built using the Microsoft .NET Framework. Key features of the software include:
* Real-time data acquisition module able to acquire data from third-party 8 channel 3D force plates with real-time results for immediate feedback during tests.
* Digital filtering module where the signal is treated for best fit for the analysis.
* Analysis module able to calculate force, power, velocities, trajectories, mechanical impulse and timing during the different phases of the tests using discrete analysis algorithms (see the sketch after this list).
* Reporting module for plotting and exporting selected variables.
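As an illustration of the discrete analysis involved, the sketch below derives takeoff velocity and jump height from a vertical ground-reaction-force curve using the impulse-momentum relation, numerically integrating net force over time. The synthetic force trace and body mass are placeholders; the actual software additionally handles filtering, automatic phase detection and the other test types.

```python
import numpy as np

G = 9.81                      # gravitational acceleration, m/s^2

def jump_metrics(force_n, dt, body_mass_kg):
    """Takeoff velocity and jump height from a vertical force-time curve (impulse-momentum)."""
    accel = force_n / body_mass_kg - G            # net acceleration of the centre of mass
    velocity = np.cumsum(accel) * dt              # discrete integration, starting from rest
    v_takeoff = velocity[np.argmax(force_n <= 0)] # velocity when the athlete leaves the plate
    height = v_takeoff**2 / (2 * G)               # projectile motion after takeoff
    return v_takeoff, height

# synthetic jump-like trace: bodyweight plus a push-off hump, then flight (zero force)
dt, mass = 0.001, 75.0
t = np.arange(0, 1.0, dt)
force = np.where(t < 0.8, mass * G + 400.0 * np.sin(2 * np.pi * t / 1.6), 0.0)
print(jump_metrics(force, dt, mass))
```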
Results: The software has been used by ASPIRE Academy sport scientists in performance assessments of professional and semi-professional athletes from Qatar and other countries.
Currently, the software can analyze Counter Movement Jump, Drop Jump, Isometric Pulls and Squat Jump.
It contains automatic algorithms to detect specific points for each test type, but allows the user to change these suggestions when needed. Feedback is immediate, in both graphical and numerical form.
Conclusions: This novel software has proven to be a useful tool for immediate and accurate analysis and reporting of selected field- and lab-based biomechanical tests. Going forward, further feedback from the applied users can lead to more features being added. Considering the architecture of the software, adding more analysis modules is relatively simple; for example, work is currently underway on a sprint running analysis module.
Autonomous Coverage Management in Smart Camera Networks
Authors: Vinay Kolar, Vikram Munishwar, Nael Abu-Ghazaleh and Khaled Harras
Abstract: Recent advances in imaging and communication have led to the development of smart cameras that can operate autonomously and collaboratively to meet various application requirements. Networks of such cameras, called Smart Camera Networks (SCNs), have a range of applications in areas such as monitoring and surveillance, traffic management and health care. The self-configuring nature of the cameras, which adjust their pan, tilt and zoom (PTZ) settings, coupled with wireless connectivity, differentiates them substantially from classical multi-camera surveillance networks.
One of the important problems in SCNs is: “How to configure cameras to obtain the best possible coverage of events happening within the area of interest?” As the scale of cameras grows from tens to hundreds of cameras, it is impractical to rely on humans to configure cameras to best track areas of interest. Thus, supporting autonomous configuration of cameras to maximize their collective coverage is a critical requirement in SCNs.
Our research first focuses on a simplified version of the problem, where the field of view (FoV) of a camera can be adjusted only by changing its pan in a discrete manner. Even with such simplifications, solving the problem optimally is NP-hard. Thus, we propose centralized, distributed and semi-centralized heuristics that outperform the state-of-the-art approaches. Furthermore, the semi-centralized approach provides coverage accuracy close to the optimal, while reducing the communication latency by 97% and 74% compared to the centralized and distributed approaches, respectively.
Next, we consider the problem without FoV constraints; we allow FoVs to be adjusted in the PTZ dimensions in a continuous manner. While PTZ configurations significantly increase the coverable area, the continuous adjustment eliminates any sub-optimality resulting from discrete settings. However, supporting these features typically results in an extremely large number of feasible FoVs per camera, out of which only one optimal FoV will be selected. We show that the problem of finding the minimum set of feasible FoVs per camera is NP-hard in general, but that, due to the geometric constraints introduced by the camera's FoV, it can be solved in polynomial time. Our proposed algorithm has a polynomial worst-case complexity of O(n³).
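To give a flavour of the discrete-pan coverage problem, the sketch below implements a simple greedy heuristic: visit cameras one at a time and give each the pan setting that covers the most still-uncovered targets. This is a generic greedy baseline written for illustration, with made-up coverage sets; it is not the centralized, distributed or semi-centralized algorithms evaluated in the paper.

```python
# coverage[camera][pan_setting] = set of target IDs covered by that (camera, pan) choice
# (hypothetical numbers purely for illustration)
coverage = {
    "cam1": {0: {1, 2}, 90: {2, 3, 4}, 180: {5}},
    "cam2": {0: {4, 5}, 90: {6}, 180: {1, 7}},
    "cam3": {0: {3}, 90: {7, 8}, 180: {8, 9}},
}

def greedy_pan_assignment(coverage):
    """Assign each camera the pan that adds the most not-yet-covered targets."""
    covered, assignment = set(), {}
    for cam, options in coverage.items():
        best_pan = max(options, key=lambda pan: len(options[pan] - covered))
        assignment[cam] = best_pan
        covered |= options[best_pan]
    return assignment, covered

assignment, covered = greedy_pan_assignment(coverage)
print(assignment, sorted(covered))
```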
Query Processing in Private Data Outsourcing Using Anonymization
Authors: Ahmet Erhan Nergiz and Chris Clifton
Abstract: Data outsourcing is a growing business. Cloud computing developments such as Amazon Relational Database Service promise further reduced cost. However, use of such a service can be constrained by privacy laws, requiring specialized service agreements and data protection that could reduce economies of scale and dramatically increase costs.
We propose a private data outsourcing approach where the link between identifying information and sensitive (protected) information is encrypted, with the ability to decrypt this link residing only with the client. As the server no longer has access to individually identifiable protected information, it is not subject to privacy laws and can offer a service that does not need to be customized to country- or sector-specific requirements; any risk of violating privacy through releasing sensitive information tied to an individual remains with the client. The data model used in this work is shown with an example in Figure 1.
This work presents a relational query processor operating within this model. The goal is to minimize communication and client-side computation, while ensuring that the privacy constraints captured in the anatomization are maintained. At first glance, this is straightforward: standard relational query processing at the server, except that any joins involving the encrypted key must be done at the client; an appropriate distributed query optimizer should do a reasonably good job of this. However, two issues arise that confound this simple approach:
1. By making use of the anatomy groups, and the knowledge that there is a one-to-one mapping (unknown to the server) between tuples in such groups, we can perform portions of the join between identifying and sensitive information at the server without violating privacy constraints, and
2. Performing joins at the client and sending results back to the server for further processing can violate privacy constraints.
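To make the anatomization model concrete, the toy sketch below splits a table into an identifier relation and a sensitive relation that share only a group ID; the server can join at group granularity (issue 1 above), while the exact tuple-level pairing requires the client-held decryption of the encrypted link. All table contents are invented for illustration and do not come from the paper's Figure 1.

```python
# Server-side anatomized relations: linked only through a group id (gid).
identifier_table = [                  # quasi-identifiers, individually identifiable
    {"name": "alice", "zip": "12345", "gid": 1},
    {"name": "bob",   "zip": "12349", "gid": 1},
    {"name": "carol", "zip": "54321", "gid": 2},
    {"name": "dave",  "zip": "54329", "gid": 2},
]
sensitive_table = [                   # protected values; tuple-level link is encrypted
    {"gid": 1, "diagnosis": "flu",      "enc_link": "0x9a.."},
    {"gid": 1, "diagnosis": "diabetes", "enc_link": "0x4c.."},
    {"gid": 2, "diagnosis": "asthma",   "enc_link": "0x71.."},
    {"gid": 2, "diagnosis": "flu",      "enc_link": "0x2e.."},
]

def server_group_join(gid):
    """Server-side join at group granularity: reveals only group-level associations."""
    people = [r for r in identifier_table if r["gid"] == gid]
    values = [s["diagnosis"] for s in sensitive_table if s["gid"] == gid]
    return people, values             # e.g., 'alice' is linked only to the *set* of values

# the client, holding the key for enc_link, resolves the exact one-to-one mapping
# inside each group before returning final answers to the user
print(server_group_join(1))
```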
GreenLoc: Energy Efficient Wifi-Based Indoor Localization
Authors: Mohamed Abdellatif, Khaled Harras and Abderrahmen Mtibaa
Abstract: User localization and positioning systems have been a core challenge in the domain of context-aware pervasive systems and applications. GPS has been the de-facto standard for outdoor localization; however, the geo-satellite signals upon which GPS relies are inaccurate in indoor environments. Therefore, various indoor localization techniques based on triangulation, scene analysis, or proximity have been introduced. The most prominent technologies over which these techniques are applied include WiFi, Bluetooth, RFID, infrared, and UWB. Due to the ubiquitous deployment of access points, WiFi-based localization via triangulation has emerged as one of the most prominent indoor positioning solutions. A major deployment obstacle for such systems, however, is the high energy consumption of WiFi adapters in mobile devices, where energy is the most valuable resource.
We propose GreenLoc, an indoor green localization system that exploits the sensors prevalent in today's smart-phones in order to dynamically adapt the frequency of the location updates required. Significant energy gains can therefore be achieved when users are not mobile. For example, accelerometers can aid in detecting different user states, such as walking, running or stopping. Based on these states, mobile devices can dynamically decide upon the appropriate update frequency. We accommodate various motion speeds by estimating the velocity of the device using the latest two location coordinates and the time interval between these two recorded locations. We have taken the first steps towards implementing GreenLoc, based on the well-known Ekahau system. We have also conducted preliminary tests utilizing the accelerometer, gravity, gyroscope, and light sensors residing on the HTC Nexus One and iPhone 4 smart-phones.
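The adaptation loop described above can be sketched in a few lines: estimate speed from the last two position fixes and their timestamps, and stretch or shrink the WiFi scan interval accordingly. The speed thresholds and intervals below are invented placeholders, not values measured in GreenLoc.

```python
import math

def estimate_speed(p_prev, p_curr, t_prev, t_curr):
    """Speed in metres/second from the latest two location fixes (indoor map coordinates)."""
    dist = math.hypot(p_curr[0] - p_prev[0], p_curr[1] - p_prev[1])
    return dist / max(t_curr - t_prev, 1e-6)

def next_update_interval(speed_mps):
    """Longer intervals (less WiFi scanning, less energy) when the user is slow or still."""
    if speed_mps < 0.2:     # effectively stationary (placeholder threshold)
        return 30.0         # seconds between location updates
    if speed_mps < 1.5:     # walking pace
        return 5.0
    return 1.0              # running: track closely

speed = estimate_speed((10.0, 4.0), (11.2, 4.5), t_prev=100.0, t_curr=102.0)
print(speed, next_update_interval(speed))
```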
To further save energy in typical indoor environments, such as malls, schools, and airports, GreenLoc exploits people's proximity when moving in groups. Devices within short range of each other do not each need to be tracked individually. Therefore, GreenLoc detects and clusters users moving together and elects a reference node (RN) based on device energy levels and needs. The elected RN is then tracked via triangulation, while the other nodes in the group are tracked relative to the RN's location using Bluetooth. Our initial analysis demonstrates very promising results for this system.
A Data Locality and Skew Aware Task Scheduler for MapReduce in Cloud Computing
Authors: Mohammad Hammoud, Suhail Rehman and Majd Sakr
Abstract: Inspired by the success and the increasing prevalence of MapReduce, this work proposes a novel MapReduce task scheduler. MapReduce is by far one of the most successful realizations of large-scale, data-intensive, cloud computing platforms. As compared to traditional programming models, MapReduce automatically and efficiently parallelizes computation by running multiple Map and/or Reduce tasks over distributed data across multiple machines. Hadoop, an open source implementation of MapReduce, schedules Map tasks in the vicinity of their input splits seeking diminished network traffic. However, when Hadoop schedules Reduce tasks, it neither exploits data locality nor addresses data partitioning skew inherent in many MapReduce applications. Consequently, MapReduce experiences a performance penalty and network congestion as observed in our experimental results.
Recently there has been some work concerned with leveraging data locality in Reduce task scheduling. For instance, one study suggests a locality-aware mechanism that inspects Map inputs and predicts the corresponding consuming reducers. The input splits are subsequently assigned to Map tasks near the future reducers. While such a scheme addresses the problem, it targets mainly public-resource grids and does not fully substantiate the accuracy of the suggested prediction process. In this study, we propose the Locality-Aware Skew-Aware Reduce Task Scheduler (LASAR), a practical strategy for improving MapReduce performance in clouds. LASAR attempts to schedule each reducer at its center-of-gravity node. It controllably avoids scheduling skew, a situation where some nodes receive more reducers than others, and promotes effective pseudo-asynchronous Map and Reduce phases, resulting in earlier completion of submitted jobs, diminished network traffic, and better cluster utilization.
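The center-of-gravity idea can be illustrated by placing each reducer on the node that already holds most of that reducer's partition (minimizing the bytes that must cross the network), while capping how many reducers any one node may receive to avoid scheduling skew. The partition-size matrix and the cap below are invented for illustration; LASAR's actual weighting and skew control are more involved.

```python
# partition_bytes[reducer][node] = bytes of that reducer's input already stored on the node
# (hypothetical values purely for illustration)
partition_bytes = {
    "r0": {"n0": 800, "n1": 100, "n2": 100},
    "r1": {"n0": 700, "n1": 200, "n2": 100},
    "r2": {"n0": 100, "n1": 100, "n2": 900},
}
MAX_REDUCERS_PER_NODE = 1     # crude skew cap for the example

def place_reducers(partition_bytes, cap):
    """Greedy center-of-gravity placement with a per-node reducer cap."""
    load = {}                                              # reducers assigned per node
    placement = {}
    for reducer, sizes in partition_bytes.items():
        # prefer nodes holding more of this reducer's data, skipping saturated nodes
        for node in sorted(sizes, key=sizes.get, reverse=True):
            if load.get(node, 0) < cap:
                placement[reducer] = node
                load[node] = load.get(node, 0) + 1
                break
    return placement

print(place_reducers(partition_bytes, MAX_REDUCERS_PER_NODE))
# r0 -> n0; r1 would also prefer n0 but is pushed to n1 by the skew cap; r2 -> n2
```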
We implemented LASAR in Hadoop-0.20.2 and conducted extensive experiments to evaluate its potential. We found that it outperforms current Hadoop by 11%, and by up to 26%, on the utilized benchmarks. We believe LASAR is applicable to several cloud computing environments and many essential applications, including but not limited to shared environments and scientific applications. In fact, a large body of work has observed partitioning skew in many critical scientific applications. LASAR paves the way for these applications, and others, to be effectively ported to various clouds.
-
-
-
Multi-Layered Performance Monitoring for Cloud Computing Applications
Authors: Suhail Rehman, Mohammad Hammoud and Majd SakrAbstractCloud computing revolutionizes the way large amounts of data are processed and offers a compelling paradigm to organizations. An increasing number of data-intensive scientific applications are being ported to cloud environments such as virtualized clusters, in order to take advantage of increased cost-efficiency, flexibility, scalability, improved hardware utilization and reduced carbon footprint, among others.
However, due to the complexity of the application execution environment, routine tasks such as monitoring, performance analysis and debugging of applications deployed on the cloud become cumbersome and complex. These tasks often require close interaction and inspection of multiple layers in the application and system software stack. For example, when analyzing a distributed application that has been provisioned on a cluster of virtual machines, a researcher might need to monitor the execution of his program on the VMs, or the availability of physical resources to the VMs. This would require the researcher to use different sets of tools to collect and analyze performance data from each level.
Otus is a tool that enables resource attribution in clusters and currently reports only the virtual resource utilization and not the physical resource utilization on virtualized clusters. This is insufficient to fully understand application behavior on a cloud platform; it would fail to account for the state of the physical infrastructure, its availability or the variation in load by other VMs on the same physical host, for example.
We are extending Otus to collect metrics from multiple layers, starting with the hypervisor. Otus can now collect information from both the VM level and the hypervisor level; this information is stored in an OpenTSDB database, which scales to large clusters. A web-based application allows the researcher to selectively visualize these metrics in real time or for a particular time range in the past.
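As a minimal sketch of the collection path, assuming OpenTSDB's plain-text put protocol on its default port and illustrative metric and tag names (not the ones Otus uses), the code below pushes a VM-level and a hypervisor-level sample for the same VM:

```python
# Minimal sketch of feeding VM- and hypervisor-level samples to OpenTSDB over its
# plain-text "put" protocol (TCP port 4242). Metric and tag names are illustrative.
import socket
import time

def send_metrics(host, samples, port=4242):
    """samples: iterable of (metric, value, tags-dict)."""
    lines = []
    now = int(time.time())
    for metric, value, tags in samples:
        tag_str = " ".join(f"{k}={v}" for k, v in tags.items())
        lines.append(f"put {metric} {now} {value} {tag_str}\n")
    with socket.create_connection((host, port)) as sock:
        sock.sendall("".join(lines).encode("ascii"))

if __name__ == "__main__":
    send_metrics("tsdb.example.org", [
        # VM-level view (as seen by the guest OS)
        ("vm.cpu.util", 62.5, {"vm": "hadoop-worker-3", "layer": "guest"}),
        # Hypervisor-level view of the same VM (as seen by the host)
        ("hv.cpu.util", 91.0, {"vm": "hadoop-worker-3", "layer": "hypervisor"}),
    ])
```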
We have tested our multi-layered monitoring technique on several Hadoop MapReduce applications and clearly identified the causes of several performance problems that would otherwise not be apparent using existing methods. Apart from helping researchers understand application needs, our technique could also help accelerate the development and testing of new platforms for cloud researchers.
-
-
-
Towards a New Termination Checker for the Coq Proof Assistant
AbstractModern societies rely on software applications for performing many critical tasks. As this trend increases, so does the necessity to develop cost-effective methods of writing software that ensure that essential safety and security requirements are met. In this context, dependent type theories are gaining adoption as a valid tool for performing formal verification of software.
The focus of this work is Coq, a proof assistant based on a dependent type theory called the Calculus of Inductive Constructions. Developed at INRIA (France) for over 20 years, it is arguably one of the most successful proof assistants to date. It has been used in several real-world large-scale projects such as the formalization of a verification framework for the Java Virtual Machine, a proof of the Four Color Theorem, and a formally verified compiler for the C programming language (project CompCert).
Coq is both a proof assistant and a programming language. To ensure soundness of the formal verification approach, Coq imposes several conditions on the source programs. In particular, all programs written in Coq must be terminating. The current implementation of the termination checker uses syntactic criteria that are too restrictive and limiting in practice, hindering the usability of the system.
In previous work we proposed an extension of Coq with a termination checker based on the theory of sized types, and we have shown the soundness of the approach. Furthermore, compared to the syntactic criteria currently used, our approach is more powerful, easier to understand, and easier to implement, as evidenced by a prototype implementation we developed.
Our objective is to turn our prototype into an implementation of a new core theory and termination checker for Coq. We expect that the resulting system will be more efficient and easier to understand for users. Furthermore it will increase the expressive power and usability of Coq, permitting the use of formal verification on a wider range of applications.
-
-
-
A Natural Language Processing-Based Active and Interactive Platform for Accessing English Language Content and Advanced Language Learning
Authors: Kemal Oflazer, Teruko Mitamura, Tomas By, Hideki Shima and Eric RieblingAbstractSmartReader is a general-purpose "reading appliance" being implemented at Carnegie Mellon University (Qatar and Pittsburgh), building upon an earlier prototype version. It is an artificial intelligence system that employs advanced language processing technologies and can interact with the reader and respond to queries about the content, words and sentences in a text. We expect it to be used by students in Qatar and elsewhere to help improve their comprehension of English text. SmartReader is motivated by the observation that text is still the predominant medium for learning, especially at the advanced level, and that text, being "bland", is hardly a conducive and motivating medium for learning, especially when one does not have access to tools that help one get over language roadblocks, ranging from unknown words to unrecognized or forgotten names to hard-to-understand sentences. SmartReader strives to make reading English textual material an "active" and "interactive" process, with the user interacting with the text through an anytime-anywhere, contextually guided query mechanism based on contextual user intent recognition. With SmartReader, a user can:
- inquire about the contextually correct meaning or synonyms of a word or of idiomatic and multi-word constructions;
- select a person's name and get an immediate "flashback" to the first (or the last) time the person was encountered in the text, as a reminder of the details of that person;
- extract a summary of a section to recall important aspects of the content at the point she left off, and continue reading with a significantly refreshed context;
- select a sentence that she may not fully understand and ask SmartReader to break it down, simplify it, or paraphrase it for better comprehension;
- test her comprehension of the text in a page or a chapter by asking SmartReader to dynamically generate quizzes and answering them;
- ask questions about the content of the text and get answers, in addition to many other functions.
SmartReader is being implemented as a multi-platform (tablet/PC) client-server system using HTML5 technology, with the Unstructured Information Management Architecture (UIMA), recently used in IBM's Watson Q/A system in the Jeopardy Challenge, as the underlying language processing framework.
-
-
-
NEXCEL, A Deductive Spreadsheet
AbstractUsability and usefulness have made the spreadsheet one of the most successful computing applications of all time: millions rely on it every day for anything from typing grocery lists to developing multimillion-dollar budgets. One thing spreadsheets are not very good at is manipulating symbolic data and helping users make decisions based on it. By tapping into recent research in logic programming, databases and cognitive psychology, we propose a deductive extension to the spreadsheet paradigm which addresses precisely this issue. The accompanying tool, which we call NEXCEL, is intended as an automated assistant for the daily reasoning and decision-making needs of computer users, in the same way that a spreadsheet application such as Microsoft Excel assists them every day with calculations simple and complex. Users without formal training in logic or computer science can interactively define logical rules in the same simple way as they define formulas in Excel. NEXCEL immediately evaluates these rules, thereby returning lists of values that satisfy them, again just as with numerical formulas. The deductive component is seamlessly integrated into the traditional spreadsheet so that a user not only still has access to the usual functionalities, but is able to use them as part of the logical inference and, additionally, to embed deductive steps in a numerical calculation.
Under the hood, NEXCEL uses a small logic programming language inspired by Datalog to define derived relations: the target region of the spreadsheet contains a set of logical clauses in the same way that calculated cells contain a numerical formula in a traditional spreadsheet. Therefore, logical reasoning reduces to computing tables of data by evaluating Datalog-like definitions, a process that parallels the calculation of numerical formulas. Each row in the calculated relation is a tuple of values satisfying the definition for this relation, so that the evaluated table lists all such solutions, without repetitions. This linguistic extension significantly enriches the expressive power of the spreadsheet paradigm. Yet, it is provided to the user through a natural extension of the mostly excellent interfaces of modern spreadsheet applications.
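The evaluation idea can be illustrated with the small Python sketch below: a Datalog-style rule set is applied repeatedly until the derived relation stops growing, yielding a duplicate-free table of solutions. This is only an analogy for the mechanism, not NEXCEL's engine.

```python
# Naive bottom-up fixpoint evaluation of a Datalog-like rule, analogous to how a
# derived NEXCEL region could be computed: apply the rules until no new tuples
# appear, then list every solution exactly once.
def transitive_closure(edges):
    """edges: set of (a, b) facts.  Derived relation:
         reaches(X, Y) :- edge(X, Y).
         reaches(X, Z) :- reaches(X, Y), edge(Y, Z)."""
    reaches = set(edges)
    while True:
        new = {(x, z) for (x, y) in reaches for (y2, z) in edges if y == y2}
        if new <= reaches:          # fixpoint: nothing new was derived
            return sorted(reaches)  # duplicate-free table of solutions
        reaches |= new

if __name__ == "__main__":
    reports_to = {("ann", "bob"), ("bob", "carol")}
    for row in transitive_closure(reports_to):
        print(row)   # ('ann', 'bob'), ('ann', 'carol'), ('bob', 'carol')
```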
-
-
-
Characterizing Scientific Applications on Virtualized Cloud Platforms
AbstractIn general, scientific applications require different types of computing resources based on the application's behavior and needs. For example, page indexing in an Arabic search engine requires sufficient network bandwidth to process millions of web pages, while seismic modeling is CPU and graphics intensive for real-time fluid analysis and 3D visualization. As a potential solution, cloud computing, with its elastic, on-demand and pay-as-you-go model, can offer a variety of virtualized compute resources to satisfy the demands of various scientific applications. Currently, deploying scientific applications onto large-scale virtualized cloud computing platforms is based on a random mapping or some rule of thumb developed through past experience. Such provisioning and scheduling techniques cause overload or inefficient use of the shared underlying computing resources, while delivering little to no satisfactory performance guarantees. Virtualization, a core enabling technology in cloud computing, enables the coveted flexibility and elasticity, yet it introduces several difficulties with resource mapping for scientific applications.
In order to enable informed provisioning, scheduling and perform optimizations on cloud infrastructures while running scientific workloads, we propose the utilization of a profiling technique to characterize the resource need and behavior of such applications. Our approach provides a framework to characterize scientific applications based on their resource capacity needs, communication patterns, bandwidth needs, sensitivity to latency, and degree of parallelism. Although the programming model could significantly affect these parameters, we focus this initial work on characterizing applications developed using the MapReduce and Dryad programming models. We profile several applications, while varying the cloud configurations and scale of resources in order to study the particular resource needs, behavior and identify potential resources that limit performance. A manual and iterative process using a variety of representative input data sets is necessary to reach informative conclusions about the major characteristics of an application's resource needs and behavior. Using this information, we provision and configure a cloud infrastructure, given the available resources, to best target the given application. In this preliminary work, we show experimental results across a variety of applications and highlight the merit in precise application characterization in order to efficiently utilize the resources available across different applications.
-
-
-
Human-Robot Interaction in an Arabic Social and Cultural Setting
Authors: Imran Fanaswala, Maxim Makatchev, Brett Browning, Reid Simmons and Majd SakrAbstractWe have permanently deployed Hala, the world's first English- and Arabic-speaking robot receptionist, for more than 500 days in an uncontrolled multi-cultural, multi-lingual environment within Carnegie Mellon Qatar.
Hala serves as a research testbed for studying the influence of socio-cultural norms and the nature of interactions during human-robot interaction within a multicultural, yet primarily ethnic Arab, setting.
Hala, as a platform, owes its uptime to several independently engineered components for modeling user interactions, syntactic and semantic language parsing, inviting users with a laser, handling facial animations, text-to-speech and lip synchronization, error handling and reporting, post dialogue analysis, networking/interprocess communication, and a rich client interface.
We conjecture that disparities exist in discourse, appearance, and non-verbal gestures amongst interlocutors of different cultures and native tongues. By varying Hala's behavior and responses, we gain insight into these disparities (if any); to that end, we have measured the rate of thanks after the robot's answer across cultures, users' willingness to answer personal questions, the correlation between language and the acceptance of robot invitations, the duration of conversations, and the effectiveness of running an open-ended experiment versus surveys.
We want to understand whether people communicate with a robot (rather, an inanimate object with human-like characteristics) differently than amongst themselves. Additionally, we want to extrapolate these differences and similarities while accounting for culture and language. Our results indicate that users in Qatar thanked Hala less frequently than their counterparts in the US. The robot often answered personal questions and inquiries (e.g., her marital status, job satisfaction, etc.); however, only 10% of the personal questions posed by the robot were answered by users. We observed a 34% increase in interactions when the robot initiated the conversation by inviting nearby users, and the subsequent duration of the conversation also increased by 30%. Bringing language into the mix, we observed that native Arabic speakers were twice as likely to accept an invitation from the robot, and they also tended to converse for 25% longer than users from other cultures.
These results indicate a disparity in interaction between English and Arabic users, thereby encouraging the creation of culture-specific dialogues, appearances and non-verbal gestures for an engaging social robot with regionally relevant applications.
-
-
-
Overcoming Machine Tools Blindness by a Dedicated Computer Vision System
Authors: Hussien J Zughaer and Ghassan Al-KindiAbstractAlthough Computerized Numerical Control (CNC) machines are currently regarded as the heart of machining workshops, they still suffer from machine blindness; hence, they cannot automatically judge the performance of the applied machining parameters or monitor tool wear. Therefore, parts produced on these machines may not be as precise as expected. In this research an innovative system is developed and successfully tested to improve the performance of CNC machines. The system utilizes a twin-camera computer vision system, integrated with the CNC machine controller to facilitate on-line monitoring and assessment of machined surfaces. The outcome of the monitoring and assessment task is used for real-time control of the applied machining parameters by a decision-making subsystem, which automatically decides whether to keep or alter the employed machining parameters or to apply a tool change. To facilitate the integration of computer vision with CNC machines, a comprehensive system is developed to tackle a number of pinpointed issues that obstruct such integration, including scene visibility (e.g. the effects of coolant and cut chips, as well as camera mounting and lighting), the effects of machine vibration on the quality of the obtained roughness measurements, the selection of the most suitable roughness parameter to employ, and the assessment of the effects of machining parameters on the acquired roughness measurements. Two system rigs employing different models of CNC machines were developed and used in the conducted tests in order to generalize the findings. Two cameras are mounted on the machine spindle of each of the two employed CNC machines to provide valid image data according to the cutting direction. Proper selection and activation of the relevant camera is achieved automatically by the developed system, which analyzes the most recently executed tool path movement to decide which camera is to be activated. In order to assess the machined surface quality and cutting tool status, image data are processed to evaluate the resulting tool imprints on the machined surface. An indicating parameter to assess the resulting tool imprints is proposed and used. The overall results show the validity of the approach and encourage further development to realize wider-scale application of vision-based CNC machines.
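The camera-activation logic might be sketched as follows; the camera orientations and the alignment rule are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch of selecting which of the two spindle-mounted cameras to
# activate from the direction of the most recent tool-path move (assumed logic).
import math

# Assumed camera orientations: unit vectors of the cutting directions each camera covers.
CAMERA_DIRECTIONS = {"cam_x": (1.0, 0.0), "cam_y": (0.0, 1.0)}

def active_camera(prev_point, curr_point):
    dx, dy = curr_point[0] - prev_point[0], curr_point[1] - prev_point[1]
    norm = math.hypot(dx, dy)
    if norm == 0:
        return None  # no movement, keep the current camera
    move = (dx / norm, dy / norm)
    # Pick the camera whose viewing direction is most aligned with the move.
    return max(CAMERA_DIRECTIONS,
               key=lambda cam: abs(move[0] * CAMERA_DIRECTIONS[cam][0] +
                                   move[1] * CAMERA_DIRECTIONS[cam][1]))

if __name__ == "__main__":
    print(active_camera((0, 0), (10, 1)))   # cam_x (mostly X-direction cut)
    print(active_camera((0, 0), (1, 12)))   # cam_y
```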
-
-
-
EEG - Mental Task Discrimination by Digital Signal Processing
AbstractRecent advances in computer hardware and signal processing have made possible the use of EEG signals, or “brain waves”, for communication between humans and computers. Locked-in patients now have a way to communicate with the outside world, but even with the latest techniques, such systems still suffer from communication rates on the order of 2-3 tasks per minute. In addition, existing systems are not likely to be designed with flexibility in mind, leading to slow systems that are difficult to improve.
This thesis addresses the classification of different mental tasks through the use of the electroencephalogram (EEG). EEG signals recorded from several subjects through multiple channels (electrodes) have been studied during the performance of five mental tasks: a baseline task, for which the subjects were asked to relax as much as possible; a multiplication task, for which the subjects were given a nontrivial multiplication problem to solve without vocalizing or making any other movements; a letter-composing task, for which the subjects were instructed to mentally compose a letter to a friend without vocalizing; a rotation task, for which the subjects were asked to visualize a particular three-dimensional block figure being rotated about its axis; and a counting task, for which the subjects were asked to imagine a blackboard and to visualize numbers being written on it sequentially.
The work presented here may form part of a larger project whose goal is to classify EEG signals belonging to a varied set of mental activities in a real-time Brain-Computer Interface, in order to investigate the feasibility of using different mental tasks as a wide communication channel between people and computers.
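A generic pipeline for this kind of mental-task classification is sketched below on synthetic data: band-power features per channel feed a standard multi-class classifier. The sampling rate, frequency bands and classifier are common choices assumed for illustration, not necessarily those used in this work.

```python
# Generic EEG mental-task classification pipeline (illustrative, synthetic data):
# per-channel band-power features -> multi-class classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 250                      # sampling rate (Hz), assumed
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(window):
    """window: (n_channels, n_samples).  Returns per-channel power in each band."""
    freqs = np.fft.rfftfreq(window.shape[1], d=1.0 / FS)
    psd = np.abs(np.fft.rfft(window, axis=1)) ** 2
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for lo, hi in BANDS.values()]
    return np.concatenate(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for labelled EEG windows: 100 windows, 6 channels, 2 s each.
    windows = rng.standard_normal((100, 6, 2 * FS))
    labels = rng.integers(0, 5, size=100)        # 5 mental tasks
    X = np.array([band_powers(w) for w in windows])
    clf = LogisticRegression(max_iter=1000)
    print(cross_val_score(clf, X, labels, cv=5).mean())  # ~chance on random data
```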
-
-
-
Joint Hierarchical Modulation and Network Coding for Two Way Relay Network
Authors: Rizwan Ahmad, Mazen O. Hasna and Adnan Abu-DayyaAbstractCooperative communications has gained a lot of attention in the research community recently. This is because the broadcast nature of wireless networks, which was earlier considered a drawback, can now be used to provide spatial diversity to increase throughput, reduce energy consumption and provide network resilience. The main drawback of cooperative communication is that it requires more bandwidth than traditional communication networks. Decode and Forward (DF) is one of the cooperative forwarding strategies, in which the relay nodes first decode the data and then retransmit it to the destination. DF requires advanced techniques that can be used at the intermediate relay nodes to improve spectrum utilization.
Some well-known techniques for spectrum efficiency are Network Coding (NC) and Hierarchical Modulation (HM). With NC, nodes in a network are capable of combining packets into a single transmission, thus reducing the number of transmissions. HM is a technique that allows the transmission of multiple data streams simultaneously. Both HM and NC are useful techniques for spectral efficiency.
In this work, we evaluate the performance of a joint HM and NC scheme for two-way relay networks. The relaying is based on a signal-to-noise ratio (SNR) threshold at the relay. In particular, a two-way cooperative network with two sources and one relay is considered, as shown in Fig. 1. Two different protection classes are modulated by a hierarchical 4/16-Quadrature Amplitude Modulation (QAM) constellation at the source. Based on the instantaneous received SNR at the relay, the relay decides to retransmit both classes using a hierarchical 4/16-QAM constellation, to retransmit only the more-protected class using a Quadrature Phase Shift Keying (QPSK) constellation, or to remain silent. These thresholds at the relay give rise to multiple transmission scenarios in a two-way cooperative network. Simulation results are provided to verify the analysis.
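The relay's threshold logic can be sketched as follows; the two SNR thresholds are illustrative assumptions, and the real scheme's decisions follow the analysis summarized above.

```python
# Illustrative relay decision for the two-way relay scheme: based on the
# instantaneous received SNR, forward both classes (hierarchical 4/16-QAM),
# only the more-protected class (QPSK), or remain silent.  Thresholds are assumed.
import random

T_HIGH_DB = 15.0   # above this the relay forwards both classes (assumed)
T_LOW_DB = 7.0     # above this it forwards at least the protected class (assumed)

def relay_action(snr_db):
    if snr_db >= T_HIGH_DB:
        return "forward both classes (hierarchical 4/16-QAM)"
    if snr_db >= T_LOW_DB:
        return "forward more-protected class only (QPSK)"
    return "remain silent"

if __name__ == "__main__":
    random.seed(1)
    # Crude stand-in for instantaneous SNR samples of a fading link.
    for _ in range(5):
        snr = random.uniform(0, 25)
        print(f"{snr:5.1f} dB -> {relay_action(snr)}")
```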
-
-
-
Repairing Access Control Configurations via Answer Set Programming
Authors: Khaled Mohammed Khan and Jinwei HuAbstractAlthough various access control models have been proposed, access control configurations are error prone, and there is no assurance of the correctness of access control configurations. When we find errors in an access control configuration, we take immediate action to repair the configuration. The repair is difficult, largely because arbitrary changes to the configuration may pose no less a threat than the errors themselves; in other words, constraints are placed on the repaired configuration. The presence of constraints makes a manual trial-and-error approach less attractive. There are two main shortcomings with the manual approach. Firstly, it is not clear whether the objectives are reachable at all; if not, we waste time trying to repair an error-prone configuration. Secondly, we have no knowledge of the quality of the solution, such as the correctness of the repair.
In order to address these shortcomings, we aim to develop an automated approach to the task of repairing access control configurations. We have utilized answer set programming (ASP), a declarative knowledge representation paradigm, to support such an automated approach. The rich modeling language of ASP enables us to capture and express the repair objectives and the constraints. In our approach, the repair instance is translated into an ASP, and ASP solvers are invoked to evaluate it.
Although applications of ASP follow the general "encode-compute-extract" approach, they differ in how the problems are represented in ASP. In our case, there are two principal factors which render the proposed problem and approach non-trivial. Firstly, we need to identify constraints which are not only amenable to ASP interpretation, but also expressive enough to capture common idioms of security and business requirements; there is a trade-off to make. Secondly, our ASP programs should model the quality measure of repairs: when more than one repair is possible, the reported one is optimized in terms of the quality measure. We have also undertaken extensive experiments on both real-world and synthetic datasets. The experimental results validate the effectiveness and efficiency of the automated approach.
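The ASP encoding itself is beyond the scope of this abstract, but the plain-Python sketch below (deliberately brute force, not ASP) illustrates the underlying repair problem: find a smallest set of changes that removes the violations while respecting the constraints. The configuration and constraints are illustrative.

```python
# Plain-Python illustration (not the ASP encoding) of the repair problem: find a
# smallest set of permission revocations that removes all violations while keeping
# every availability constraint satisfied.
from itertools import combinations

config = {("alice", "approve"), ("alice", "submit"), ("bob", "approve")}
violations = [{("alice", "approve"), ("alice", "submit")}]   # separation of duty
must_keep = [("bob", "approve")]                             # availability constraint

def is_valid(cfg):
    return (all(not v <= cfg for v in violations) and
            all(req in cfg for req in must_keep))

def minimal_repair(cfg):
    """Try ever-larger sets of revocations until a valid configuration is found."""
    for k in range(len(cfg) + 1):
        for removed in combinations(sorted(cfg), k):
            repaired = cfg - set(removed)
            if is_valid(repaired):
                return set(removed), repaired
    return None

if __name__ == "__main__":
    removed, repaired = minimal_repair(config)
    print("revoke:", removed)            # one of the two single-revocation repairs
    print("repaired config:", repaired)
```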
-
-
-
A Security Profile-based Assurance Framework for Service Software
AbstractA service software is a self-contained, modular application deployed over standard computing platforms and readily accessible by users within or across organizational boundaries using the Internet. For businesses to open up their applications for interaction with other service software, a fundamental requirement is that there be sufficient choice of security provisions, allowing service consumers to select and verify the actual security assurances of services. In this context, the specific research challenge is how we could design service software focusing on the consumer's specific security requirements, and provide assurances for those security needs. Clearly, security requirements vary from consumer to consumer. This work outlines a framework focusing on the selection of service software consistent with the security requirements of consumers, and on compatibility checking of the assurances provided by services. We use profile-based compatibility analysis techniques to form an essential building block towards assuring the security of service software.
In our research, we envision a security profile based compatibility checking that focuses more on automatic analysis of security compatibility using formal analysis techniques of security properties of software services. Our approach is based on three main building blocks: reflection of security assurances; selection of preferred assurances; and checking of security compatibility. Clearly, our vision and research for service security based on profile based compatibility analysis will form an essential building block towards realizing the full potential of service oriented computing. We foresee that the provision of the proposed scheme for service security profiling and compatibility analysis will significantly advance the state of practice in service oriented computing. At the same time, its development represents a new and highly challenging research target in the area.
This work is of great significance to the development of future software systems that facilitate security-aware cross-organizational business activities. The envisioned capability to integrate service software across-organizational boundaries that meets security requirements of all parties involved represents a significant technological advance in enabling practical business-to-business computing, leading to new business opportunities. At the same time, the approach will make significant scientific advancement in understanding the problem of application-level system security in a service oriented computing context.
-
-
-
Hierarchical Clustering for Keyphrase Extraction from Arabic Documents Based on Word Context
Authors: Rehab Duwairi, Fadila Berzou and Souad MecheterAbstractKeyphrase extraction is a process by which the set of words or phrases that best describe a document is specified. The phrases could be extracted from the words of the document itself, or they could be external, specified from an ontology for a given domain. Extracting keyphrases from documents is critical for many applications such as information retrieval, document summarization or clustering. Many keyphrase extractors view the problem as a classification problem and therefore need training documents (i.e. documents whose keyphrases are known in advance). Other systems view keyphrase extraction as a ranking problem. In the latter approach, the words or phrases of a document are ranked based on their importance, and phrases with high importance (usually located at the beginning of the list) are recommended as possible keyphrases for the document.
This abstract describes Shihab, a system for extracting keyphrases from Arabic documents. Shihab views keyphrase extraction as a ranking problem. The list of keyphrases is generated by clustering the phrases of a document. Phrases are built from words which appear in the document and consist of one, two or three words. The idea is to group similar phrases into one cluster. The similarity between phrases is determined by calculating the Dice value of their corresponding contexts, where a phrase's context is the sentence in which that phrase appears. Agglomerative hierarchical clustering is used in the clustering phase. Once the clusters are ready, each cluster nominates a phrase to the set of candidate keyphrases. This phrase is called the cluster representative and is determined according to a set of heuristics. Shihab's results were compared with those of existing keyphrase extractors such as KP-Miner and Arabic-KEA, and the results were encouraging.
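The core similarity and clustering steps can be condensed into the Python sketch below, where a phrase's context is its containing sentence's word set and similarity is the Dice coefficient; the merge threshold and the single clustering pass are simplifying assumptions, not Shihab itself.

```python
# Condensed sketch of the core steps (illustrative, not the actual system):
# phrase context = the sentence the phrase occurs in; similarity = Dice coefficient
# of the two context word sets; similar phrases are merged agglomeratively.
def dice(a, b):
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

def agglomerative_cluster(contexts, threshold=0.5):
    """contexts: {phrase: list of context words}.  Merge the closest clusters
    until no pair of clusters is more similar than the threshold."""
    clusters = [[p] for p in contexts]
    def sim(c1, c2):
        return max(dice(contexts[p], contexts[q]) for p in c1 for q in c2)
    while len(clusters) > 1:
        (i, j), best = max(
            (((i, j), sim(clusters[i], clusters[j]))
             for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda x: x[1])
        if best < threshold:
            break
        clusters[i] += clusters.pop(j)
    return clusters

if __name__ == "__main__":
    contexts = {
        "information retrieval": "systems for information retrieval rank documents".split(),
        "document ranking":      "retrieval systems rank documents by relevance".split(),
        "olive trees":           "olive trees grow in dry climates".split(),
    }
    print(agglomerative_cluster(contexts))
    # [['information retrieval', 'document ranking'], ['olive trees']]
```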
-
-
-
On the Design of Learning Games and Puzzles for Children with Intellectual Disability
Authors: Aws Yousif Fida El-Din and Jihad Mohamed AljaamAbstractThe objective of this paper is to present the edutainment learning games that we are developing for Qatari children with moderate intellectual disability. These games will help them to learn effectively in fun and enjoyable ways. We use multimedia technology merged with intelligent algorithms to guide the children in play. As the number of children with intellectual disability is increasing, early intervention to teach them properly using information technology is very important. However, few research projects on disability are being conducted in the Arab world, and these projects are still not enough to respond to the real needs of the disabled and achieve satisfactory outcomes. Developing edutainment games for children with intellectual disability is a very challenging task. First, it requires content developed by specialized instructors. Second, the interface design of the games must be presented clearly and be easy to interact with. Third, the games must run slowly, in order to give the children some time to think and interact. Fourth, regardless of the results, the game must allow a minimum level of general satisfaction, to avoid depressing the children. Fifth, the game must make maximum use of multimedia elements to draw the children's attention. We show some multimedia applications for children with different disabilities that were developed at Qatar University. The first enhances mathematics skills and offers a symbolic gift reward when the child guesses the answer correctly, a feature that motivated the children to play the game several times a day; it also uses videos to illustrate the game before the children play it. The purpose of the second multimedia application is to test the children's memory; it uses different multimedia elements to present different games which require deep concentration in order to guess the answer. These games helped the children develop a strong sense of self-confidence. The learning puzzles that we have developed are based on intelligent algorithms that avoid cycling and allow the children to reach a solution; two different approaches were used, Simulated Annealing and Tabu Search.
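The simulated-annealing idea mentioned above can be sketched generically as follows; the placeholder puzzle, cost function and cooling schedule are illustrative assumptions rather than the algorithms used in the actual games.

```python
# Generic simulated-annealing loop (placeholder puzzle: order a sequence of tiles).
import math
import random

def anneal(state, cost, neighbour, t0=1.0, cooling=0.995, steps=5000):
    current, best = state, state
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with probability exp(-d/t),
        # which is what lets the search escape cycles and local optima.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling
    return best

if __name__ == "__main__":
    random.seed(0)
    target = list(range(8))                      # solved tile order
    cost = lambda s: sum(a != b for a, b in zip(s, target))
    def neighbour(s):
        s = list(s)
        i, j = random.sample(range(len(s)), 2)   # swap two tiles
        s[i], s[j] = s[j], s[i]
        return s
    start = random.sample(range(8), 8)
    print(anneal(start, cost, neighbour))        # usually [0, 1, 2, ..., 7]
```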
-
-
-
Minimal Generators Based Algorithm for Text Features Extraction: A More Efficient and Large Scale Approach
Authors: Samir Elloumi, Fethi Fejani, Sadok Ben Yahia and Ali JaouaAbstractIn recent years, several mathematical concepts have been successfully explored in the computer science domain as a basis for finding original solutions to complex problems related to knowledge engineering, data mining, information retrieval, etc.
Thus, Relational Algebra (RA) and Formal Concept Analysis (FCA) may be considered useful mathematical foundations that unify data and knowledge in information retrieval systems. For example, some elements of a fringe relation (from the RA domain), called isolated points, have successfully been used in FCA as formal concept labels or composed labels. Once associated with the words of a textual document, these labels constitute relevant features of the text. Here, we propose the GenCoverage algorithm for covering a Formal Context (as a formal representation of a text) based on isolated labels, and we use these labels (or text features) for categorization, corpus structuring and micro-macro browsing as advanced functionality in the information retrieval task.
The main thrust of the introduced approach relies heavily on the close connection between isolated points and minimal generators (MGs). MGs stand at the antipodes of the closures within their respective equivalence classes. Since minimal generators are the smallest elements within an equivalence class, their detection and traversal is greatly eased, permitting a swift construction of the coverage. Thorough experiments provide empirical evidence of the performance of our approach.
-
-
-
Preserving Privacy from Unsafe Data Correlation
Authors: Bechara Al Bouna, Christopher Clifton and Qutaibah MalluhiAbstractWith the emergence of cloud computing, providing safe data outsourcing has become an active topic. Several regulations have been issued to foresee that individual and corporate information will be kept private in a cloud computing environment. To guarantee that these regulations are fully maintained, the research community has proposed new privacy constraints such as k-anonymity, l-diversity and t-closeness. These constraints are based on generalization, which transforms identifying attribute values into a more general form, and on partitioning, to eliminate possible linking attacks. Despite their efficiency, generalization techniques severely affect the quality of outsourced data and its correlation. To cope with such defects, Anatomy has been proposed. Anatomy releases quasi-identifier values and sensitive values in separate tables, which essentially preserves privacy and at the same time captures a large amount of data correlation. However, there are situations where data correlation could lead to an unintended leak of information. For example, if an adversary knows that patient Roan (P1) takes a regular drug, the join of Prescription (QIT) and Prescription (SNT) on the attribute GID leads to the association of RetinoicAcid with patient P1 due to correlation.
In this paper, we present a study to counter privacy violations due to data correlation and at the same time improve aggregate analysis. We show that privacy requirements affect table decomposition based on what we call correlation dependencies. We propose a safe grouping principle to ensure that correlated values are grouped together in unique partitions that obey l-diversity and at the same time preserve the correlation. An optimization strategy is also designed to reduce the number of anonymized tuples. Finally, we extended the UTD Anonymization Toolbox to implement the proposed algorithm and demonstrate its efficiency.
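A much-simplified sketch of the grouping idea is shown below: tuples sharing a sensitive value stay in the same partition, and a partition is released only once it contains at least l distinct sensitive values. The real safe-grouping algorithm involves further conditions; this is only an illustration.

```python
# Simplified illustration of safe grouping: tuples with the same (correlated)
# sensitive value stay together, and a group is only closed once it holds at
# least l distinct sensitive values (l-diversity).
from collections import defaultdict

def safe_group(records, l):
    """records: list of (quasi_identifier, sensitive_value) tuples."""
    by_sensitive = defaultdict(list)
    for qi, s in records:
        by_sensitive[s].append((qi, s))        # correlated values stay together
    groups, current, distinct = [], [], set()
    for s, bucket in by_sensitive.items():
        current.extend(bucket)
        distinct.add(s)
        if len(distinct) >= l:                 # partition now satisfies l-diversity
            groups.append(current)
            current, distinct = [], set()
    return groups, current                     # leftovers are suppressed, not released

if __name__ == "__main__":
    recs = [("p1", "RetinoicAcid"), ("p2", "RetinoicAcid"),
            ("p3", "Insulin"), ("p4", "Aspirin"), ("p5", "Aspirin")]
    released, suppressed = safe_group(recs, l=2)
    print(released)    # [[('p1', 'RetinoicAcid'), ('p2', 'RetinoicAcid'), ('p3', 'Insulin')]]
    print(suppressed)  # [('p4', 'Aspirin'), ('p5', 'Aspirin')]  (too few distinct values)
```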
-
-
-
News Alerts Trigger System to Support Business Owners
Authors: Jihad Mohamad Aljaam, Khaled Sohil Alsaeed and Ali Mohamad JaouaAbstractThe exponential growth of financial news coming from different sources makes it very hard to benefit from such news effectively. Business decision makers who rely on this news are unable to follow it accurately in real time. They always need to be alerted immediately about any potential financial events that may affect their businesses, whenever those events occur. Many important news items can simply be buried in thousands of lines of financial data and cannot be detected easily. Such news may sometimes have a major impact on businesses, and key decision makers should be alerted about it. In this work, we propose an alert system that screens structured financial news and triggers alerts based on user profiles. These alerts have different priority levels: low, medium and high. Whenever the alert priority level is high, a quick intervention should be taken to avoid potential risks to the business; such events are considered tasks and should be treated immediately. Matching user profiles with news events can sometimes be straightforward, but it can also be challenging, especially when the keywords in the user profiles are merely synonyms of the event keywords. In addition, alerts can sometimes depend on a combination of correlated news events coming from different sources of information, which makes their detection a computationally challenging problem. Our system allows users to define their profiles in three different ways: (1) selecting from a list of keywords related to event properties; (2) providing free keywords; and (3) entering simple short sentences. The system triggers alerts immediately whenever news events related to the user profiles occur. It takes into consideration correlated events and the concordance of the event keywords with synonyms of the user profile keywords. The system uses the vector space model to match profile keywords with the news event words. As a consequence, the rate of false-positive alerts is low, whereas the rate of false-negative alerts is relatively high; however, enriching the dictionary of synonyms would reduce the false-negative rate.
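The matching step can be illustrated with the minimal Python sketch below, where profile keywords are expanded with a toy synonym dictionary and compared to a news item using cosine similarity in the vector space model; the thresholds, synonyms and priority mapping are illustrative assumptions.

```python
# Minimal vector-space matching of user profiles against news items (illustrative).
import math
from collections import Counter

SYNONYMS = {"profit": {"earnings", "income"}}      # toy synonym dictionary

def expand(words):
    out = set(words)
    for w in words:
        out |= SYNONYMS.get(w, set())
    return out

def cosine(a, b):
    va, vb = Counter(a), Counter(b)
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def alert_level(profile_words, news_words):
    sim = cosine(expand(profile_words), expand(news_words))
    if sim >= 0.6:
        return "high"
    if sim >= 0.3:
        return "medium"
    return "low" if sim > 0 else None

if __name__ == "__main__":
    profile = "oil profit quarterly".split()
    news = "company reports higher quarterly earnings on oil exports".split()
    print(alert_level(profile, news))    # medium
```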
-
-
-
Assistive Technology for People with Hearing and Speaking Disabilities
AbstractThe community with hearing or speaking disabilities represents a significant component of society that needs to be well integrated in order to foster great advancements by leveraging the contributions of every member of society. When those people cannot read lips, they usually need interpreters to help them communicate with people who do not know sign language, and they also need an interpreter when they use phones, because communication is not easy unless they use special aiding devices, like a Relay Service or Instant Messaging (IM). As the number of people with hearing and speaking disabilities is increasing significantly, building bridges of communication between the deaf and hearing communities is essential to deepen mutual cooperation in all aspects of life. The problem can be summarized in one question: how can this bridge be constructed to allow people with hearing and speaking difficulties to communicate?
This project suggests an innovative framework that contributes to the efficient integration of people with hearing disabilities into society by using wireless communication and mobile technology. Unlike existing solutions (Captel and Relay Service), this project is completely operator-independent: it depends on an extremely powerful Automatic Speech Recognition and Processing Server (ASRPS) that can process speech and transform it into text. Independent means that it recognizes the voice regardless of the speaker and the characteristics of his or her voice. On the other side there will be a Text To Speech (TTS) engine, which will take the text sent to the application server and transmit it as speech. The second aim of the project is to develop an iPhone/iPad application for the hearing impaired. The application facilitates the reading of the received text by converting it into sign language animations, which are constructed from a database; we are currently using American Sign Language for its simplicity. Nevertheless, the application can be further developed for other languages such as Arabic Sign Language and British Sign Language. The application also assists the writing process by providing a customized user interface, including a customized keyboard, that allows the deaf to communicate efficiently with others.
-
-
-
Using Cognitive Dimensions Framework to Evaluate Constraint Diagrams
Authors: Noora Fetais and Peter ChengAbstractThe Cognitive Dimensions of Notations framework is a heuristic framework created by Thomas Green for analysing the usability of notational systems. Microsoft used this framework as a vocabulary for evaluating the usability of their C# and .NET development tools. In this research we used the framework to compare the evaluation of constraint diagrams with the evaluation of natural language by running a usability study. The result of this study will help in determining whether users would be able to use constraint diagrams to accomplish a set of tasks, and from it we can predict difficulties that may be faced when working on these tasks. Two steps were required. The first step is to decide which generic activities a system is intended to support. An activity is described at a rather abstract level in terms of the structure of information and constraints on the notational environment. Cognitive dimensions constitute a method for theoretically evaluating the usability of a system. Its dimensional checklist approach is used to improve different aspects of the system, and each improvement is associated with a trade-off cost on other aspects. Each generic activity has its own requirements in terms of cognitive dimensions, so the second step is to scrutinize the system and determine how it lies on each dimension. If the two profiles match, all is well. Every dimension should be described with illustrative examples, case studies, and associated advice for designers. In general, an activity such as exploratory design, where software designers make changes at different levels, is the most demanding activity. This means that dimensions such as viscosity and premature commitment must be low while visibility and role-expressiveness must be high.
-
-
-
IT System for Improving Capability and Motivation of Workers: An Applied Study with Reference to Qatar
Authors: Hussein Alsalemi and Pam MayhewAbstractInformation Systems (IS) is a discipline that has its roots in the field of organizational development. Information Technology (IT) is a primary tool that has been used by IS to support the aim of developing organisations. However, IT has focused on supporting two main areas in organisations to help them become better at achieving their goals. These two areas are the operational and the strategic. Little research has been devoted to supporting the human element (the workforce) with the aim of improving organizations' ability to achieve their goals. In IS, Socio-Technical Theory is one theory that researches approaches to improving employees' satisfaction for the sake of better work productivity and business value. This theory is based on the idea of achieving a harmonious integration of the different subsystems in an organization (the social, technical and environmental subsystems).
The aim of this research is to find out if IT can be used to improve the capability and motivation of the workforce in organisations so that these organizations can better achieve their goals.
This research is an applied study with reference to the Qatar National Vision 2030 (QNV2030) Human Pillar. Using a grounded theory research methodology, the research characterized the main factors that affect the capability and motivation of the Qatari workforce. These findings were used to develop a theoretical model, which identifies core factors, gives a description of each factor and explains the interactions between them. The theoretical model was then transformed into an IT system consisting of a number of IT tools, which was tested in different organizations in Qatar. The test evaluated the system's effectiveness in improving the capabilities and motivation of the Qatari workforce and explored its effectiveness in helping these organizations better achieve their goals.
The research concluded that the developed IT system, based on the theoretical model, can help improve the motivation and capability of a workforce, provided that a defined set of organizational and leadership qualities exists within the organisation.
-
-
-
Utilization of Mixtures as Working Fluids in Organic Rankine Cycle
Authors: Mirko Stijepovic, Patrick Linke, Hania Albatrni, Rym Kanes, Umaira Nisa, Huda Hamza, Ishrath Shiraz and Sally El MeragawiAbstractOver the past several years, Organic Rankine Cycle (ORC) processes have become very promising for power production from low grade heat sources: solar, biomass, geothermal and waste heat. The key challenge in the design process is the selection of an appropriate working fluid. A large number of authors have used pure components as the working fluid when assessing ORC performance.
ORC systems that use a single working fluid component have two major shortcomings. First, the majority of applications involve temperatures of the heat sink and source fluids that vary during the heat transfer process, whereas the temperature of the working fluid during evaporation and condensation remains constant. As a consequence, a pinch point is encountered in the evaporator, giving rise to large temperature differences at one end of the heat exchanger. This leads to irreversibility that in turn reduces process efficiency; a similar situation is encountered in the condenser. A second shortcoming of the Rankine cycle is its lack of flexibility.
These shortcomings result from a mismatch between the thermodynamic properties of pure working fluids, the requirements imposed by the Rankine cycle and the particular application. In contrast, when working fluid mixtures are used instead of single-component working fluids, improvements can be obtained in two ways: through the inherent properties of the mixture itself, and through cycle variations which become available with mixtures. The most obvious positive effect is a decrease in energy destruction, since the occurrence of a temperature glide during a phase change provides a good match of the temperature profiles in the condenser and evaporator.
This paper presents detailed simulations and economic analyses of Organic Rankine Cycle processes for energy conversion of low grade heat sources. The paper explores the effect of mixture utilization on common ORC performance assessment criteria in order to demonstrate the advantages of employing mixtures as working fluids compared to pure fluids. We illustrate these effects using zeotropic mixtures of paraffins as ORC working fluids.
-
-
-
Application of Nanotechnology in Hybrid Solar Cells
Authors: Narendra Kumar Agnihotra and S SakthivelAbstractPlastic/organic/polymer photovoltaic solar cells are fourth-generation cells; however, the efficiency, thermal stability and cost of fourth-generation solar cells are still not sufficient to replace conventional solar cells. Hybrid solar cells are one of the alternative technologies for harnessing solar power into electrical power and overcoming the high cost of conventional solar cells. This review paper focuses on the concept of hybrid solar cells that combine organic/polymer materials blended with inorganic semiconducting materials. The paper presents the importance of nanoscale materials and of their shape and size (nanotubes, nanowires, nanocrystals), which can increase the efficiency of solar cells. The study shows that nanomaterials have immense potential, and the application of nanomaterials (inorganic/organic/polymer) can improve the performance of photovoltaic solar cells. Tuning of nanomaterials increases the functionality, band gap, optical absorption and shape of the materials, by multiple orders compared to microscale materials. Hybrid solar cells have the unique properties of inorganic semiconductors along with the film-forming properties of conjugated polymers. Hybrid materials have great potential because of their unique properties and are showing great results at the preliminary stages of research. The advantages of organic/polymer materials include easy processing, roll-to-roll production, lighter weight, and flexible shape and size of the solar cells. The application of nanotechnology in hybrid solar cells has opened the door to the manufacturing of a new class of high-performance devices.
-
-
-
Optimized Energy Efficient Content Distribution Over Wireless Networks with Mobile-to-Mobile Cooperation
Authors: Elias Yaacoub, Fethi Filali and Adnan Abu-DayyaAbstractMajor challenges towards the development of next generation 4G wireless networks include fulfilling the foreseeable increase in power demand of future mobile terminals (MTs), in addition to meeting the high throughput and low latency requirements of emerging multimedia services. Studies show that the high energy consumption of battery-operated MTs will be one of the main limiting factors for future wireless communication systems. Emerging multimedia applications require the MTs' wireless interfaces to be active for long periods while downloading large data sizes, which drains the batteries.
The evolution of MTs with multiple wireless interfaces helps to deal with this problem. This results in a heterogeneous network architecture with MTs that actively use two wireless interfaces: one to communicate with the base station (BS) or access point over a long-range (LR) wireless technology (e.g., UMTS/HSPA, WiMAX, or LTE) and one to communicate with other MTs over a short-range (SR) wireless technology (e.g., Bluetooth or WLAN). Cooperative wireless networks proved to have a lot of advantages in terms of increasing the network throughput, decreasing the file download time, and decreasing energy consumption at MTs due to the use of SR mobile-to-mobile collaboration (M2M). However, the studies in the literature apply only to specific wireless technologies in specific scenarios and do not investigate optimal strategies.
In this work, we consider energy minimization in content distribution with M2M collaboration and derive the optimal solution in a general setup with different wireless technologies on the LR and SR. Scenarios with multicasting and unicasting are investigated. Content distribution delay is also analyzed. Practical implementation aspects of the cooperative techniques are studied and different methods are proposed to overcome the practical limitations of the optimal solution. Simulation results with different technologies on the LR and SR are generated, showing significant superiority of the proposed techniques. Ongoing work is focusing on incorporating quality of service constraints in the energy minimization problem and in designing a testbed validating the proposed methods.
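A back-of-the-envelope Python sketch of why cooperation helps is shown below, comparing the total terminal energy with and without M2M forwarding under assumed (illustrative) power and rate figures; it is a toy model, not the optimization formulated in this work.

```python
# Back-of-the-envelope comparison (assumed power/rate figures, not measurements):
# total MT energy to distribute one file to N terminals, with and without
# mobile-to-mobile (M2M) cooperation.
FILE_MB = 50.0
LR_RATE_MBPS, LR_POWER_W = 2.0, 1.8        # long-range link (e.g. HSPA), assumed
SR_RATE_MBPS, SR_POWER_W = 20.0, 0.4       # short-range link (e.g. WLAN), assumed

def lr_energy():
    return (FILE_MB * 8 / LR_RATE_MBPS) * LR_POWER_W          # one LR download (J)

def sr_energy():
    # one SR transmission plus one SR reception, both modelled with SR_POWER_W
    return 2 * (FILE_MB * 8 / SR_RATE_MBPS) * SR_POWER_W

def no_cooperation(n):
    return n * lr_energy()

def with_cooperation(n):
    # one terminal downloads over LR, then forwards to the other n-1 over SR
    return lr_energy() + (n - 1) * sr_energy()

if __name__ == "__main__":
    for n in (2, 5, 10):
        print(n, round(no_cooperation(n)), "J vs", round(with_cooperation(n)), "J")
```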
-
-
-
Design of Novel Gas-to-Liquid Reactor Technology Utilizing Fundamental and Applied Research Approaches
Authors: Nimir Elbashir, Aswani Mogalicherla and Elfatih ElmalikAbstractThis research work is in line with the State of Qatar's aspiration of becoming the “gas capital of the world”, as it focuses on developing cost-effective Gas-to-Liquid (GTL) technologies via Fischer-Tropsch synthesis (FTS). The objective of our present research activities is to develop a novel approach to FTS reactor design by controlling the thermo-physical characteristics of the reaction media through the introduction of a supercritical solvent.
The research is facilitated by QNRF through the flagship National Priorities Research Program, highlighting the importance of the subject matter to Qatar. It is a multidisciplinary consortium comprising highly specialized teams of foremost scientists in their fields from four universities.
FTS is the focal process in which natural gas is converted to ultra-clean liquid fuels; it is a highly complex chemical reaction in which synthesis gas (a mixture of hydrogen and carbon monoxide) enters the reactor and propagates to various hydrocarbons over a metal-based catalyst. Many factors impede the current commercial FTS technologies, chiefly transport and thermal limitations due to the nature of the phase of operation (classically either liquid or gas phase). Interestingly, the most advanced FTS technologies, employing either the liquid phase or the gas phase, are currently in operation in Qatar (Shell's Pearl project and Sasol's Oryx GTL plant).
This project is concerned with the design of an FTS reactor to be operated under supercritical fluid (SCF) conditions in order to leverage certain advantages over the aforementioned commercial technologies. The design of this novel reactor is based on phase behavior and kinetic studies of the non-ideal SCF media, in addition to a series of process integration and optimization studies coupled with the development of sophisticated dynamic control systems. These results are currently being used at TAMUQ to build a bench-scale reactor to verify the simulation studies.
To date, our collective research has yielded 8 peer-reviewed publications, more than 8 conference papers and proceedings, as well as numerous presentations in international conferences. It is noteworthy to mention that an advisory board composed of experts from the world leading energy companies follows the progress of this project toward its ultimate goal.
-
-
-
Hierarchical Cellular Structures with Tailorable Properties
Authors: Abdel Magid Hamouda, Amin Ajdari, Babak Haghpanah Jahromi and Ashkan VaziriAbstractHierarchical structures are found in many natural and man-made materials [1]. This structural hierarchy plays an important role in determining the overall mechanical behavior of the structure, and it has been suggested that increasing the hierarchical level of a structure results in a better performing structure [2]. Honeycombs, moreover, are well-known structures for lightweight, high-strength applications [3]. In this work, we have studied the mechanical properties of honeycombs with hierarchical organization using theoretical, numerical, and experimental methods. The hierarchical organization is made by replacing the edges of a regular honeycomb structure with smaller regular honeycombs. Our results show that honeycombs with structural hierarchy have superior properties compared to regular honeycombs, and that a relatively broad range of elastic properties, and thus behavior, can be achieved by tailoring the structural organization of hierarchical honeycombs, and more specifically the two dimension ratios. Increasing the level of hierarchy provides a wider range of achievable properties. Further optimization should be possible by also varying the thickness of the hierarchically introduced cell walls, and thus the relative distribution of mass between different hierarchy levels. These hierarchical honeycombs can be used in the development of novel lightweight multifunctional structures, for example as the cores of sandwich panels, or in the development of lightweight deployable energy systems.
-