Qatar Foundation Annual Research Forum Volume 2012 Issue 1
- Conference date: 21-23 Oct 2012
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2012
- Published: 01 October 2012
Non-invasive multiple camera calibration in highly crowded environments
Authors: Emanuel Aldea, Khurom Kiyani and Maria Petrou
Background: Very dense crowds that exceed three people per square metre present many challenges in computer vision for efficiently measuring quantities such as density and pedestrian trajectories. An accurate characterisation of such dense crowds can improve existing models and help to develop better strategies for mitigating crowd disasters. Pedestrian models used for tracking are often based on assumptions which are no longer valid in highly dense crowds, e.g. the absence of occlusions. Recently, multiple camera systems with partially overlapping fields of view have been shown to offer critical advantages over non-overlapping schemes. Objectives: We focus on the non-invasive camera calibration imposed by real crowded environments, which is needed in order to segment individuals accurately. We also underline the interdisciplinary challenges related to computer vision and real-time processing. Methods: Although a substantial amount of work has been devoted to multiple camera calibration, automating the process in specific environments remains challenging. In dense crowds, such as in Mecca, access to the site for calibration purposes or for adding supporting visual features is impossible. Moreover, the supervising role of the user in calibration scenarios is important, and cumbersome for large camera networks. We investigate a lightweight solution based on a coarse-to-fine estimation of the camera positions using both static and dynamic features. This highlights the necessary tradeoff between the crowd coverage, the purpose of the experiment, and the static feature distribution required to register the camera system properly. A more practical aspect that we underline is the importance of accurate time synchronization within the system in the presence of a dynamic scene.
Results: We present a pilot study of the above scheme conducted at Regents Park Mosque in London on a Friday, when the mosque is particularly crowded. We set up a distributed system of accurately synchronized Firewire cameras, acquiring high-resolution data at 8 Hz. We also aim to present some preliminary single-camera studies of crowd flow using real-world data from the 2011 Muslim Hajj pilgrimage.
A flexible and concurrent MapReduce programming model for shared-data applications
Authors: Fan Zhang and Qutaibah M Malluhi
The rapid growth of large-scale data processing has established the MapReduce programming model as a widely accepted solution. The simple map and reduce stages make it convenient for programmers to quickly compose and design complex solutions for large-scale problems. Due to the ever-increasing complexity of execution logic in real-life applications, more and more MapReduce applications involve multiple correlated jobs encapsulated and executed according to a defined order. For example, a PageRank job involves two iterative MapReduce sub-jobs; the first joins the rank and linkage tables and the second calculates the aggregated rank of each URL. Two non-iterative MapReduce sub-jobs, for counting outgoing URLs and assigning initial ranks, are also included in the PageRank job. In addition, the MapReduce programming model lacks built-in support and optimization when the input data are shared. Performance benefits when shared and frequently accessed files are read from the local file system instead of the distributed file system. This paper presents Concurrent MapReduce, a new programming model built on top of MapReduce that provides optimization and scheduling for big-data applications composed of a large number of shared data items. Concurrent MapReduce has three major characteristics: (1) Unlike traditional homogeneous map and reduce functions, it provides a flexible framework that supports and manages multiple, heterogeneous map and reduce functions. In other words, programmers are able to write many different map and reduce functions in a single MapReduce job. (2) It launches multiple jobs with both task-level and job-level concurrency. For job-level concurrency, the framework manages the shared data by replicating them from HDFS to the local file system to ensure data locality.
For task-level concurrency, it is the programmers' responsibility to define the data to be shared. (3) We have evaluated the framework using two benchmarks: Single Source Shortest Path and String Matching. Results demonstrate up to a 4X performance speedup compared to traditional non-concurrent MapReduce.
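The heterogeneous map/reduce idea above can be sketched in a few lines. This is an illustrative toy, not the authors' API: several different map functions run as concurrent jobs over one shared input, each paired with its own reduce function.

```python
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

def run_concurrent_mapreduce(shared_records, jobs):
    """jobs: list of (map_fn, reduce_fn) pairs sharing one input list."""
    def run_job(map_fn, reduce_fn):
        groups = defaultdict(list)
        for record in shared_records:          # shared data, read per job
            for key, value in map_fn(record):
                groups[key].append(value)
        return {k: reduce_fn(k, vs) for k, vs in groups.items()}

    with ThreadPoolExecutor() as pool:         # job-level concurrency
        futures = [pool.submit(run_job, m, r) for m, r in jobs]
        return [f.result() for f in futures]

# Two heterogeneous jobs over the same shared text records.
records = ["a b a", "b c"]
word_count = (lambda line: [(w, 1) for w in line.split()],
              lambda k, vs: sum(vs))
char_count = (lambda line: [("chars", len(line))],
              lambda k, vs: sum(vs))
results = run_concurrent_mapreduce(records, [word_count, char_count])
```

In the real system the shared records would live in HDFS and be replicated to the local file system; here a plain list stands in for that.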
Track me to track us: Leveraging short range wireless technologies for enabling energy efficient Wi-Fi-based localization
Authors: Mohamed Abdellatif, Abderrahmen Mtibaa and Khaled Harras
Given the success of outdoor tracking via GPS and the rise of real-time context-aware services, users will soon rely on applications that require higher-granularity indoor localization. This need is further manifested in countries like Qatar, where various social and business activities occur indoors. Wi-Fi-based indoor localization is one of the most researched techniques due to its ubiquitous deployment and acceptable accuracy for a wide range of applications. However, such techniques are not widely deployed, mainly due to their high demand on energy, which is a precious commodity in mobile devices. We propose an energy-efficient indoor localization system that leverages people's typical group mobility patterns and the short-range wireless technologies available on their devices. Our system architecture, shown in the figure, is designed to be easily integrated with existing Wi-Fi localization engines. We first utilize low-energy wireless technologies, such as Bluetooth, to detect and cluster individuals moving together. Our system then assigns a group representative to act as a designated cluster head that is constantly tracked. The location of other group members can be inferred so long as they remain within proximity of the cluster head. The cluster heads continue to send periodic received signal strength indicator (RSSI) updates, while the remaining members turn off their Wi-Fi interfaces, relying on the cluster head to be localized. Our system dynamically handles the merging or splitting of clusters as a result of mobility. We implement a prototype of the system and evaluate it at scale using the QualNet simulator. Our results show that we can achieve up to 55% energy reduction with a relatively small degradation in localization accuracy, averaging 2 meters.
This accuracy reduction has little impact on the typical applications expected to leverage our system.
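The clustering and cluster-head election steps can be sketched as follows. This is our reading of the scheme, not the authors' implementation: devices that hear each other over a short-range radio form a cluster (connected component), and one member is elected head; the battery-based election policy is an assumption.

```python
def cluster_devices(proximity):
    """proximity: dict device -> set of devices heard over Bluetooth.
    Returns clusters as connected components of the proximity graph."""
    seen, clusters = set(), []
    for dev in proximity:
        if dev in seen:
            continue
        stack, comp = [dev], set()
        while stack:                     # depth-first traversal
            d = stack.pop()
            if d in comp:
                continue
            comp.add(d)
            stack.extend(proximity.get(d, ()))
        seen |= comp
        clusters.append(sorted(comp))
    return clusters

def elect_heads(clusters, battery):
    # Assumed policy: the member with the most battery becomes the head
    # that keeps Wi-Fi on; everyone else turns their interface off.
    return {tuple(c): max(c, key=lambda d: battery[d]) for c in clusters}

prox = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}, "d": set()}
batt = {"a": 40, "b": 80, "c": 60, "d": 90}
clusters = cluster_devices(prox)
heads = elect_heads(clusters, batt)
```

A device moving out of Bluetooth range of its cluster simply appears as a new component on the next scan, which is how merging and splitting fall out of re-running the clustering.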
Non-destructive visual pipe mapping for inspection
Authors: Peter Hansen, Brett Browning, Peter Rander and Hatem Alismail
Pipe inspection is a critical process in many industries, including oil and gas. Conventional practice relies on a range of Non-Destructive Testing (NDT) approaches such as ultrasonic and magnetic flux leakage methods. While these approaches can provide high-accuracy wall thickness measurements, which can be used to monitor corrosion, they provide poor visualizations and are typically unable to provide full pipe coverage. Moreover, they cannot localize where in the pipe a defect lies without expensive and possibly restricted sensors such as Inertial Navigation Systems. We have developed an automated vision-based approach that builds high-resolution 3D appearance maps of pipes and provides vehicle localization. These maps include both structure and appearance information, and can be used for direct metric measurement of pipe wall thickness or as input to automatic corrosion detection algorithms. They may also be imported into 3D rendering engines to provide effective visualization of a pipe network. Our most recent system uses a wide-angle fisheye camera, which enables full pipe coverage and is sufficiently compact for practical applications. Our approach to mapping and localization builds on state-of-the-art visual odometry methods and extends them to deal with (visually) feature-poor engineered environments. We present the results of this work using image datasets collected within our constructed pipe network. A range of empirical results is presented to validate the approach.
Computational and statistical challenges with high dimensionality: A new method and efficient algorithm for feature selection in knowledge discovery
Authors: Mohammed El Anbari and Halima Bensmail
Qatar is currently building one of the largest research infrastructures in the Middle East. To this end, Qatar Foundation has established a number of universities and institutes composed of highly qualified researchers. In particular, QCRI is forming a multidisciplinary scientific computing group with special interests in machine learning, data mining and bioinformatics. We are now able to address the computational and statistical needs of a variety of researchers with a vital set of services contributing to the development of Qatar. The availability of massive amounts of data and challenges from the frontiers of research and development have reshaped statistical thinking, data analysis and theoretical studies. There is little doubt that high-dimensional data analysis will be the most important research topic in statistics in the 21st century. Indeed, the challenges of high dimensionality arise in diverse fields of science, engineering, and the humanities, ranging from genomics and health sciences to economics, finance, machine learning and data mining. For example, in biomedical studies, huge numbers of magnetic resonance images (MRI) and functional MRI data are collected for each subject, with hundreds of subjects involved. Satellite imagery has been used in natural resource discovery and agriculture, collecting thousands of high-resolution images. Other examples of this kind are plentiful in computational biology, climatology, geology and neurology, among others. In all of these fields, variable selection and feature extraction are crucial for knowledge discovery. In this paper, we propose a computationally intensive method for regularization and variable selection in linear models. The method is based on penalized least squares with a penalty function that combines the minimax concave penalty (MCP) and an L2 penalty on successive differences between coefficients.
We call it the SF-MCP method. Extensive simulation studies and applications to large biomedical datasets (leukemia and glioblastoma cancers, diabetes, proteomics and metabolomics data sets) show that our approach outperforms its competitors in terms of prediction error and the identification of relevant genes responsible for some lethal diseases.
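The penalty described above can be written down concretely. The MCP term below follows its standard definition; the fused L2 term and the combination weight `alpha` reflect our reading of the abstract and are assumptions, not the authors' exact objective.

```python
def mcp(t, lam, gamma):
    """Minimax concave penalty for a single coefficient t:
    lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam, constant after."""
    a = abs(t)
    if a <= gamma * lam:
        return lam * a - a * a / (2.0 * gamma)
    return 0.5 * gamma * lam * lam

def sf_penalty(beta, lam=1.0, gamma=3.0, alpha=0.5):
    """Sketch of the combined penalty: MCP sparsity term plus an L2
    fusion term on successive coefficient differences."""
    sparsity = sum(mcp(b, lam, gamma) for b in beta)
    fusion = sum((beta[j + 1] - beta[j]) ** 2 for j in range(len(beta) - 1))
    return sparsity + alpha * fusion

penalty = sf_penalty([0.0, 0.5, 0.5, 10.0])
```

The MCP flattens out for large coefficients (here at gamma*lam), so strong signals are not over-shrunk the way an L1 penalty would, while the fusion term encourages neighbouring coefficients to stay close.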
Hybrid pronunciation modeling for Arabic large vocabulary speech recognition
Authors: Mohamed Elmahdy, Mark Hasegawa-Johnson and Eiman Mustafawi
Arabic is a morphologically rich language. This morphological complexity results in a high out-of-vocabulary rate, which is why a lookup table for pronunciation modeling is not efficient for large vocabulary tasks. In previous research, graphemic modeling was proposed, approximating pronunciations with graphemes rather than actual phonemes. In this research, we propose a hybrid acoustic and pronunciation modeling approach for Arabic large vocabulary speech recognition tasks. The proposed approach benefits from both phonemic and graphemic modeling techniques, fusing two acoustic models together. The hybrid approach also benefits from both vocalized and non-vocalized Arabic resources, which is useful because the amount of non-vocalized resources is always larger than that of vocalized ones. Two baseline speech recognition systems were built: phonemic and graphemic. The two baseline acoustic models were combined after independent training to create a hybrid model. Pronunciation modeling was also hybrid, generating graphemic variants in addition to phonemic variants. Three techniques are proposed for pronunciation modeling: Hybrid-Or, Hybrid-And, and Hybrid-Top(n). In Hybrid-Or, either graphemic or phonemic modeling is applied for any given word. In Hybrid-And, a graphemic pronunciation is always generated in addition to existing phonemic pronunciations. Hybrid-Top(n) is a mixture of the two, applying Hybrid-Or to the top n high-frequency words. Experiments were conducted in the large vocabulary broadcast news domain with a vocabulary size of 250K. The proposed hybrid approach shows a relative reduction in WER of 8.8% to 12.6%, depending on the pronunciation modeling settings and the supervision in the baseline systems. In large vocabulary speech domains, acoustic and pronunciation modeling is a common problem among all Arabic colloquial varieties.
Thus, for future work, the proposed approach is currently being extended and evaluated on different Arabic colloquial varieties (e.g. Qatari, Egyptian, Levantine, etc.). Moreover, the proposed technique can be applied to other morphologically rich languages such as Turkish, Finnish and Korean. This work was funded by a grant from the Qatar National Research Fund under its National Priorities Research Program (NPRP), award number NPRP 09-410-1-069. The reported experimental work was performed at Qatar University in collaboration with the University of Illinois.
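The three lexicon-building strategies named above can be sketched as a single function. This is a hedged illustration of our reading of Hybrid-Or/And/Top(n); the grapheme "pronunciation" (one unit per letter) and the word-frequency ranking are stand-ins, not the authors' tooling.

```python
def graphemic(word):
    # Grapheme-based pronunciation: one modeling unit per letter.
    return list(word)

def hybrid_lexicon(words, phonemic_lex, mode="and", top_n=0, freq=None):
    """mode='or':  graphemic only when no phonemic entry exists;
       mode='and': a graphemic variant is always added;
       mode='top': 'or' for the top_n most frequent words, 'and' otherwise."""
    lex = {}
    ranked = sorted(words, key=lambda w: -(freq or {}).get(w, 0))
    top = set(ranked[:top_n])
    for w in words:
        variants = list(phonemic_lex.get(w, []))
        use_or = mode == "or" or (mode == "top" and w in top)
        if not variants or not use_or:
            variants.append(graphemic(w))   # graphemic fallback/addition
        lex[w] = variants
    return lex

phon = {"kitab": [["k", "i", "t", "a:", "b"]]}   # toy phonemic lexicon
lex_or = hybrid_lexicon(["kitab", "qalam"], phon, mode="or")
lex_and = hybrid_lexicon(["kitab", "qalam"], phon, mode="and")
```

Under Hybrid-Or a word with a phonemic entry keeps only that entry, while an out-of-lexicon word falls back to graphemes; under Hybrid-And every word also carries a graphemic variant.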
Aircraft scheduling on multiple runways
Background: Scheduling aircraft on single or multiple runways is an important and difficult problem. How aircraft are sequenced on a runway and how they are assigned to runways have a significant impact on the utilization of the runways as well as on meeting landing and departure target times. Most of the literature focuses on landing operations on a single runway, as this is an easier problem to solve. Objective: This project was funded by Qatar Foundation to address the scheduling of both landing and departing aircraft on multiple runways while attempting to meet aircraft target times. The problem is further complicated when considering sequence-dependent separation times on each runway to avoid wake-vortex effects. Methods: This research project is based on a two-pronged approach. First, mathematical optimization models were developed to find optimal runway assignments and aircraft sequences on each runway. Due to the significant computational complexity of the problem, a second approach was developed to find near-optimal solutions through local search algorithms and metaheuristics, especially for larger problems. Results: Several optimization models were developed, and the most effective one was selected to find solutions to the problem. Solution effectiveness was enhanced by adding valid inequalities to the mathematical program, which significantly reduced the computational time necessary to solve the problem. Optimal solutions were obtained for problem instances much more difficult than any reported in the available literature. A scheduling index, local search algorithms and metaheuristics (Simulated Annealing and the Metaheuristic for Randomized Priority Search, MetaRaPS) were also developed to solve the problem.
The results show that optimal or near-optimal solutions were obtained for all instances, and the value of the proposed approximate algorithms becomes more evident as the problem size increases. Conclusions: The research done in this project demonstrates that there is added value in assigning aircraft to runways and sequencing them using optimization-based methods rather than the most commonly used first-come-first-served approach. This research has the potential to change how airports schedule aircraft in order to increase runway utilization and better meet landing and departure target times.
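To make the multiple-runway assignment problem concrete, here is a deliberately simple greedy baseline, not the authors' optimization model: each aircraft, in target-time order, takes the runway where it can operate earliest, with a single separation gap standing in for the full sequence-dependent separation matrix.

```python
def schedule(aircraft, n_runways, separation):
    """aircraft: list of (name, target_time).
    separation: required gap between consecutive operations on a runway
    (a single value here; the real problem uses class-dependent gaps)."""
    runway_free = [0.0] * n_runways          # earliest next slot per runway
    plan = []
    for name, target in sorted(aircraft, key=lambda a: a[1]):
        # Earliest feasible time on each runway: not before the target,
        # not before the runway is free again.
        times = [max(target, free) for free in runway_free]
        r = min(range(n_runways), key=lambda i: times[i])
        start = times[r]
        runway_free[r] = start + separation
        plan.append((name, r, start))
    return plan

plan = schedule([("A", 0.0), ("B", 0.5), ("C", 1.0)], n_runways=2,
                separation=2.0)
```

This is essentially first-come-first-served with runway choice; the point of the project is that exact models and metaheuristics beat such baselines, especially once sequence-dependent separations make the order of operations matter.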
A novel and efficient relaying scheme for next generation mobile broadband communication systems
Authors: Mohammad Obaidah Shaqfeh and Hussein Alnuweiri
Relaying technologies have been designated as major new enabling technologies for next generation wireless broadband systems, such as 3GPP LTE-Advanced. The practical deployment of decode-and-forward (DF) relaying as supported by the current standard is based on repetition coding, meaning that the relay regenerates the same codeword generated by the source node. This scheme is suboptimal; nevertheless, it is preferred in practice due to its simple implementation. The optimal relaying schemes, called cooperative coding, are difficult to construct in practice and require a heavy computation load at the receiver. Therefore, they are rarely implemented despite their prospective performance gains. As a simple and practical alternative, we propose a novel relaying scheme that provides performance close to that of advanced cooperative coding techniques but is much less complex to implement. Our scheme is called "decode-partial-forward" (DPF) because its fundamental concept of operation is to have the relay forward one part of the source message, so that the receiver at the destination node can rely on the direct channel with the source to obtain the other part. The DPF scheme builds on repetition coding and maximal ratio combining, which are standardized techniques with low complexity and low computation load. Nevertheless, our scheme performs very close to the optimal bounds for the supported rates over relaying links. More specifically, the increase in the supported reliable transmission rates (bits/sec) using the proposed DPF scheme may exceed 30% over the conventional repetition-coding DF scheme.
Another major advantage of our proposed scheme is that it can be adapted flexibly to the channel conditions of the three links of the relay channel (source-destination, source-relay, relay-destination) by adjusting the power and rate allocation at the source and relay using simple closed-form analytic formulas. Therefore, we believe the DPF relaying scheme is an excellent option for practical deployment in telecommunication standards due to its simplicity, adaptability and high spectral efficiency gains.
Adaptive multi-channel downlink assignment for overloaded spectrum-shared multi-antenna overlaid cellular networks
Authors: Redha Mahmoud Radaydeh, Mohamed-Slim Alouini and Khalid Qaraqe
Overlaid cellular technology has been considered a promising candidate to enhance the capacity and extend the coverage of cellular networks, particularly indoors. The deployment of small cells (e.g. femtocells and/or picocells) in an overlaid setup is expected to reduce operational power and to function satisfactorily within the existing cellular architecture. Among the possible deployments of small-cell access points is to manage many of them to serve specific spatial locations, while reusing the available spectrum universally. This contribution considers the aforementioned scenario with the objective of serving as many active users as possible when the available downlink spectrum is overloaded. The case study is motivated by the importance of realizing universal resource sharing in overlaid networks, while reducing the load of distributing available resources, satisfying downlink multi-channel assignment, controlling the aggregate level of interference, and maintaining desired design/operation requirements. These objectives need to be achieved in a distributed manner in each spatial space with as low a processing load as possible when the feedback links are capacity-limited, multiple small-cell access points can be shared, and data exchange between access points cannot be coordinated. This contribution is summarized as follows. An adaptive downlink multi-channel assignment scheme is proposed for the case when multiple co-channel, shared small-cell access points are allocated to serve active users. It is assumed that the deployed access points employ isotropic antenna arrays of arbitrary sizes, operate using the open-access strategy, and transmit on shared physical channels simultaneously. Moreover, each active user can be served by a single transmit channel per access point at a time, and can sense the concurrent interference level associated with each transmit antenna channel non-coherently.
The proposed scheme aims to identify a suitable subset of transmit channels in the operating access points such that certain limits on the aggregate interference or the number of serving access points are satisfied, while reducing the processing load. The applicability of the results to some scenarios, including the identification of interference-free channels in operating access points, is explained. Numerical and simulation results clarify the gains achieved by the proposed scheme under various operating conditions.
OPERETTA: An optimal deployable energy efficient bandwidth aggregation system
Authors: Karim Habak, Khaled Harras and Moustafa Youssef
The widespread deployment of varying networking technologies, coupled with the exponential increase in end-user data demand, has led to the proliferation of multi-interface enabled devices. To date, these interfaces are mainly utilized independently based on network availability, cost, and user choice. While researchers have focused on simultaneously leveraging these interfaces by aggregating their bandwidths, these solutions have faced a steep deployment barrier and have focused on maximizing throughput while overlooking energy awareness, which is critical for mobile devices. We therefore developed OPERETTA, shown in Figure 1, an optimal, deployable, energy-efficient bandwidth aggregation system for mobile users. Our system does not require modifications to applications, legacy servers, network infrastructure, or the client kernel. If legacy servers choose to adopt our system, however, OPERETTA dynamically leverages this to achieve higher performance gains. OPERETTA is built as middleware that is responsible for scheduling various connections and/or packets to different interfaces. This middleware estimates application and network interface characteristics and utilizes these estimates to make the most appropriate scheduling decisions. We formulate our scheduling problem as a mixed integer programming problem with a special structure that allows it to be solved efficiently. This formulation allows users to achieve a desired throughput with minimal energy consumption. We evaluate OPERETTA via a prototype implementation on the Windows OS, as well as via simulation, and compare the results to the optimal achievable throughput and energy consumption. Our results show that, with no changes to current legacy servers, OPERETTA can achieve up to a 150% enhancement in throughput compared to current operating systems, with no increase in energy consumption.
In addition, with only 25% of the servers being OPERETTA-enabled, the system performance reaches the throughput upper bound. We ultimately demonstrate that OPERETTA achieves the goals of being optimal, energy-efficient, and easily and incrementally deployable.
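The throughput-versus-energy tradeoff at the heart of the formulation can be illustrated with a toy selector. The authors solve a mixed integer program; the exhaustive search below is our simplification (feasible because a device has only a handful of radios), and the interface numbers are invented for illustration.

```python
from itertools import combinations

def choose_interfaces(interfaces, target_mbps):
    """interfaces: dict name -> (bandwidth_mbps, power_mw).
    Return the cheapest-energy subset whose combined bandwidth
    meets the target throughput, or None if infeasible."""
    best = None
    names = list(interfaces)
    for k in range(1, len(names) + 1):
        for subset in combinations(names, k):
            bw = sum(interfaces[n][0] for n in subset)
            pw = sum(interfaces[n][1] for n in subset)
            if bw >= target_mbps and (best is None or pw < best[1]):
                best = (subset, pw)
    return best

# Illustrative radios: (bandwidth in Mbps, power draw in mW).
ifaces = {"wifi": (54, 800), "3g": (7, 600), "bt": (2, 100)}
best = choose_interfaces(ifaces, target_mbps=8)
```

With a modest 8 Mbps target, combining the two low-power radios beats turning on Wi-Fi alone, which is the kind of decision the real scheduler makes per connection or per packet using estimated characteristics.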
CopITS: The first connected car standard-compliant platform in Qatar and the region
Authors: Fethi Filali, Hamid Menouar and Adnan Abu-Dayya
Given the clear impact that mobility has on economic and social development, the continuous increase in the number of vehicles, coupled with the increasing mobility of people, is creating new problems and challenges that need to be addressed holistically to ensure safe, sustainable, efficient and environmentally friendly mobility systems. Cooperative Intelligent Transportation Systems (ITS) allow the transportation infrastructure, vehicles and people to be connected wirelessly (through WiFi-like technology, 3G, etc.) and help solve these new challenges. QMIC implemented a Connected Car (CopITS) platform that allows vehicles and road infrastructure to wirelessly exchange data packets, enabling new cooperative applications for road safety (e.g. accident avoidance), traffic efficiency (e.g. green light optimization), and infotainment (e.g. media and data downloading). This platform implements the latest draft of the architecture developed within IEEE and ETSI and incorporates new enhancements in communication protocols and mechanisms, which outperform existing ones by enhancing data transfer for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) scenarios as well as the overall performance of the system. Simulation studies using an integrated communication/traffic simulator have been conducted to investigate important metrics such as the scalability, efficiency, and resilience of these mechanisms. The implemented system and applications have been successfully tested in-lab and demonstrated during several events in Qatar using a real car (on-board unit) and a traffic light (roadside unit).
Moreover, QMIC's Connected Car platform was successfully tested in the 2011 and 2012 Cooperative Mobility Services Interoperability tests by running all mandatory test cases, demonstrating its interoperability with other vendors' implementations and its conformance with the standards. We believe that QMIC's Connected Car platform is an important contribution towards putting Qatar on the world map of the "best connected countries" and being ready to host global events like the FIFA World Cup 2022.
Distributed load balancing through a biomimetic self organisation framework
Authors: Ali Imran, Elias Yaacoub and Adnan Abu-Dayya
In wireless cellular systems, uneven traffic load among cells increases call blocking rates in some cells and causes low resource utilisation in others, degrading user satisfaction and the overall performance of the cellular system. Various centralised or semi-centralised Load Balancing (LB) schemes have been proposed to cope with this persistent problem; however, a fully distributed Self Organising (SO) LB solution is still needed for future cellular networks. To this end, we present a novel distributed LB solution based on an analytical framework developed on the principles of nature-inspired SO systems. A novel concept of the super-cell is proposed to decompose the problem of system-wide blocking minimization into local sub-problems in order to enable an SO distributed solution. The performance of the proposed solution is evaluated through system-level simulations for both macrocell and femtocell based systems. Numerical results show that the proposed solution can reduce blocking in the system to a level close to that of an Ideal Central Control (ICC) based LB solution. The added advantage of the proposed solution is that it does not require heavy signalling overhead.
Performance analysis of switch-based multiuser scheduling schemes with adaptive modulation in spectrum sharing systems
Authors: Marwa Khalid Qaraqe, Mohamed Abdallah, Erchin Serpedin and Mohamed-Slim Alouini
Background and Objective: Reliable high-speed data communication that supports multimedia applications for both indoor and outdoor mobile users is a fundamental requirement for next generation wireless networks and requires a dense deployment of physically coexisting network architectures. Due to limited spectrum availability, a novel interference-aware spectrum sharing concept is introduced, where networks that suffer from congested spectrum (secondary networks) are allowed to share the spectrum with networks that have spectrum available (primary networks), under the condition that limited interference occurs to the primary networks. The main objective is the development of multiuser access schemes for spectrum sharing systems whereby secondary users, randomly positioned over the coverage area, are allowed to share the spectrum with primary users under the condition that the interference observed at the primary receiver is below a predetermined threshold. Methods: Two scheduling schemes are proposed for selecting a user among those that satisfy the interference constraint and achieve an acceptable signal-to-noise ratio level. The first scheme selects the user that reports the best channel quality, while the second is based on the concept of switched diversity, where the base station scans the users sequentially until an acceptable user is found. The proposed schemes operate under two power-adaptive settings based on the interference information available at the secondary transmitter. In the on/off power setting, users transmit based on whether the interference constraint is met, while in the fully power-adaptive setting, users vary their transmission power to satisfy the interference constraint.
Results: Monte Carlo simulations were used to verify the analytical results for the multiuser secondary system in terms of average spectral efficiency (ASE), system delay, and feedback load. Conclusion: It is shown that scheduling users based on the highest channel quality increases the ASE but incurs a high feedback load. The switched scheduling scheme significantly decreases the feedback load, at the expense of a lower ASE. Furthermore, it is shown that transmit power adaptation improves the performance of spectrum sharing systems in terms of ASE while decreasing system delay and feedback load.
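The contrast between the two selection schemes can be sketched directly; the SNR and interference figures below are illustrative stand-ins, not results from the paper.

```python
def best_quality_user(users, snr_min, interference_max):
    """Scheme 1: full feedback from all users, pick the acceptable
    user with the best channel quality (highest SNR)."""
    ok = [u for u in users
          if u["snr"] >= snr_min and u["interference"] <= interference_max]
    return max(ok, key=lambda u: u["snr"])["name"] if ok else None

def switched_user(users, snr_min, interference_max):
    """Scheme 2: switched diversity, scan users sequentially and stop
    at the first acceptable one (less feedback, possibly lower SNR)."""
    for u in users:
        if u["snr"] >= snr_min and u["interference"] <= interference_max:
            return u["name"]
    return None

users = [{"name": "u1", "snr": 6.0, "interference": 0.4},
         {"name": "u2", "snr": 9.0, "interference": 0.2},
         {"name": "u3", "snr": 12.0, "interference": 0.9}]
pick_best = best_quality_user(users, snr_min=5.0, interference_max=0.5)
pick_switched = switched_user(users, snr_min=5.0, interference_max=0.5)
```

The best-quality scheme must hear from every user before deciding (high feedback, high ASE), while the switched scan stops at the first acceptable user (low feedback, lower ASE) — exactly the tradeoff reported in the conclusion.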
Human centric system for oil and gas quality and pipeline infrastructure monitoring in Qatar
Authors: Adnan Nasir, Ali Riza Ekti, Khalid A Qaraqe and Erchin Serpedin
Background and Objectives: Radio frequency identification (RFID) has paved the way for a plethora of monitoring applications in the field of oil and gas. Degradation in liquefied petroleum gas (LPG)/liquefied natural gas (LNG) quality and pipe infrastructure can be a serious problem for the oil and gas industry in Qatar. Hence, a human-centric cyber-physical system (CPS) utilizing hybrid wireless technologies, including RFIDs and other sensor motes, can detect and prevent such hazards. The CPS approach can be applied in Qatar's oil and gas sector with a customized framework architecture and event detection and decision algorithms. The objective of this research is to allow maintainers and administrators to perceive and decide on top of the monitoring system, increasing the performance and efficiency of the whole monitoring application. Methods: The sensors collect data and send them to the base station through collaborating wireless technologies. At the base station the data are processed and algorithms are run to detect events such as the presence of moisture, abnormal pressure or temperature, and defects in a pipe's infrastructure health. Human interaction helps to further refine the data for possible false alarms. Mobile applications can be used by users/administrators to send details of a perceived event directly to the base station. Results: Experiments on moisture detection were performed in the Wireless Research Laboratory at Texas A&M University at Qatar. Similarly, other sensors can be associated with the RFIDs and their data relayed to the server. A framework architecture was proposed for the human-centric approach to the detection and monitoring system.
Conclusion: We propose a system in which RFID active tags with sensors (pressure, temperature, flow, strain, etc.), in collaboration with other wireless technologies, send information regarding gas quality and the health of the pipe infrastructure. The collaboration of RFIDs and other technologies helps us create a smart human-centric monitoring system. This will enhance maintenance and the detection of events such as the presence of moisture or strain on a pipe's structure. It will additionally assist in the automatic adjustment of valve and pump settings according to the detected events.
-
Identifying, implementing and recognizing Arabic accents in facial expressions for a cross-cultural robot
Authors: Amna Alzeyara, Majd Sakr and Micheline Ziadee
In this work we attempt to understand the visual accents in Arabic facial expressions and to create culture-specific facial expressions for a female multi-lingual, cross-cultural robot. This study will enable investigation of the effect of such expressions on the quality of human-robot interaction. Our work is twofold: we first identify the existence of accent variation in facial expressions across cultures, then we validate human recognition of these accents. Facial expressions embody culture and are crucial for effective communication; hence they play an important role in multi-lingual, cross-cultural human-robot interaction. Elfenbein and Ambady found that there are culture-specific accents in facial expressions, and that the differences in expressions between cultures can create misunderstandings [Elfenbein and Ambady, 2003]. Several studies have compared American expressions with expressions from other cultures, but none of them included Arabic facial expressions, and there is no existing database of Arabic facial expressions. Consequently, we recorded videos of young Arab women narrating stories that express six emotions: happiness, sadness, surprise, fear, disgust, and disappointment. These videos were analyzed to extract Arabic accents in facial expressions. The expressions were then implemented on a 3D face model using the Facial Action Coding System (FACS). To evaluate the expressions we conducted a web-based, human-subject experiment directed at students and staff at Carnegie Mellon University in Qatar. Thirty-four participants were asked to choose the appropriate emotion for each expression and to rate, on a ten-level Likert scale, the accuracy with which the expression represents the emotion. The cultural affiliation of each participant was recorded. Preliminary results show that Arabs are more likely than non-Arabs to recognize the Arabic facial expressions.
To further support this conclusion, the survey will be redistributed to a larger number of subjects from different cultural backgrounds and geographical areas.
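The recognition comparison reported above amounts to computing per-group accuracy over the emotion-labelling responses. The sketch below illustrates that computation on invented responses; the group labels and data are hypothetical, not the study's results.

```python
# Hypothetical per-group recognition-rate computation for the
# emotion-labelling experiment. The sample responses are invented.
from collections import defaultdict

def recognition_rates(responses):
    """responses: list of (group, shown_emotion, chosen_emotion) tuples.
    Returns each group's fraction of correctly identified expressions."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, shown, chosen in responses:
        total[group] += 1
        correct[group] += (shown == chosen)
    return {g: correct[g] / total[g] for g in total}

sample = [
    ("arab", "happiness", "happiness"),
    ("arab", "disgust", "disgust"),
    ("arab", "fear", "surprise"),
    ("non-arab", "happiness", "happiness"),
    ("non-arab", "disgust", "fear"),
    ("non-arab", "fear", "surprise"),
]
print(recognition_rates(sample))
```

A chi-square test over the per-group correct/incorrect counts would then indicate whether the difference in recognition rates is statistically significant.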
-
Extending the reach of social-based context-aware ubiquitous systems
Authors: Dania Abed Rabbou, Abderrahmen Mtibaa and Khaled Harras
The proliferation of mobile devices equipped with communication technologies such as WiFi, 3G, and 4G, coupled with the exponential growth of online social networking, has increased the demand for social-context-aware systems. These systems combine social information provided by users with contextual awareness, particularly location, to provide real-time personalized services. With Euromonitor International indicating 37.7% growth in mobile phone penetration over the past five years, and ICTQatar recently reporting that each household owns 3.9 devices on average, Qatar is positioned as a strong candidate not only for the consumption of such services, but also for researching, building, and testing solutions to context-aware system challenges. We address some of these challenges and propose pragmatic solutions. Our contributions fall into two categories: (i) We design and implement SCOUT, a context-aware, ubiquitous system that enables real-time mobile services by providing contextually relevant information. This information can either be generated reactively on user request, or proactively created and disseminated to potentially interested users. Our SCOUT prototype consists of an Android-based mobile client interfacing with Facebook's API and a load-balancing profile-matching server that interacts with a localization engine. (ii) Since mobile users may not all experience reliable mobile-to-core connectivity, due to contention, cost, or lack of connectivity, we extend the reach of context-aware services to disconnected mobile users by proposing a novel communication paradigm that leverages opportunistic communication. We first send the intended information to the smallest subset of connected users deemed sufficient to reach the disconnected destinations.
This subset then selectively forwards, based on social profiles, this information to nodes that are more likely to meet the destinations. Our evaluation, via simulation, shows that our algorithm achieves an improvement of 25% to 80% compared to current communication paradigms, while reducing overhead by as much as 50%. With SCOUT currently operational, and based on the simulation results, our ongoing work includes integrating our communication paradigm into the real system. We are also working on integrating real-time group recommender systems that identify groups based on user location coupled with social information to provide real-time contextual group recommendations.
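One way to picture the relay-subset idea is a greedy selection of connected users until the estimated chance of reaching a disconnected destination crosses a target. This is a minimal sketch under stated assumptions: the per-user contact probabilities (e.g., estimated from social-profile similarity) are assumed inputs, and the greedy rule is illustrative, not the authors' exact algorithm.

```python
# Illustrative greedy selection of the smallest relay subset whose combined
# probability of meeting the disconnected destination exceeds a target.
# Contact probabilities are assumed, independently acting inputs.

def select_relays(contact_prob, target=0.9):
    """Add connected users in decreasing contact probability until the
    probability that at least one of them meets the destination >= target."""
    chosen, p_miss = [], 1.0
    for user, p in sorted(contact_prob.items(), key=lambda kv: -kv[1]):
        chosen.append(user)
        p_miss *= (1.0 - p)          # probability that no chosen relay meets it
        if 1.0 - p_miss >= target:
            break
    return chosen, 1.0 - p_miss

relays, p_reach = select_relays({"alice": 0.6, "bob": 0.5, "carol": 0.3})
print(relays, round(p_reach, 2))
```

Each chosen relay would then forward the information opportunistically, preferring nodes whose social profiles suggest they are likely to meet the destination, as the abstract describes.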
-
Technology intervention for the preservation of intangible cultural heritage with motion detecting technologies
By Muqeem Khan
Background: This trans-disciplinary study presents the initial outcomes of a key study undertaken to explore the role of augmented reality and motion-detecting technologies in the context of Intangible Cultural Heritage (ICH) for museum-related environments. Initial prototypes take the form of an interactive infrared-camera-based application that lets children engage with an Aboriginal puppet and with Arabic calligraphic writings without touching any input devices. This study is unique in that it combines two extremes: the curation of historical intangible artifacts and their preservation through digital intervention. Objectives: This project aims to produce the following outcomes: *create a proof-of-concept ICH intelligent kinesthetic learning space; *evaluate and explore knowledge-transfer opportunities for ICH afforded by peripheral games technology. The central research questions are: 1. Design: What do motion-capture and associated gaming-technology experiences suitable for knowledge transfer of ICH in a museum setting look like? 2. Exemplified/perceived effectiveness: What is the contribution of this augmenting technology in terms of the perception of authentic and engaging learning environments? 3. Sustainability, scalability and interoperability: How can museums ensure ICH content is reusable and transferable? Methods and Results: The data will be collected and analyzed according to the differences in visitors' interactions and engagements, and examined using a (2 x 3 x 4) matrix triangulation strategy. The qualitative data will then be analyzed using quantitative methods such as the Chi-square test and analysis of variance (ANOVA). The analysis will characterize visitors' behaviors and guide further development of the motion-detecting prototype. It is anticipated that the results will clarify visitors' frequency of interaction with ICH content and the length and quality of their engagement with the prototype.
Conclusions: Heritage-related intangible content is always restricted by its non-physical nature and has never been fully embedded in environments such as museums and related exhibitions. The study explores alternative, playful opportunities for knowledge transfer of ICH content in order to elicit a deeper understanding of such intangible cultural artifacts. This study complements multiple disciplines, including heritage preservation, museum technologies and emerging interaction design.
-
Interference management for multi-user cooperative relaying networks
Authors: Aymen Omri and Mazen Hasna
Background & Objectives: As the electromagnetic spectrum becomes increasingly scarce, improving spectral efficiency is extremely important for the development of future wireless communication systems. Integrating cooperative relaying techniques into wireless communication systems sheds new light on achieving better spectral efficiency. By taking advantage of the broadcast nature of wireless communications, cooperative transmission can improve communication reliability and enhance power and spectrum efficiency. Moreover, compared to other emerging techniques that achieve similar performance advantages, such as multiple-input multiple-output (MIMO) techniques, cooperative communication is superior in hardware feasibility and deployment flexibility. These advantages make cooperative communication one of the promising techniques for future wireless communication systems, and many cooperative communication schemes have recently been included in cellular standards such as WiMAX and LTE-Advanced. However, the promised throughput and diversity gains may be lost in the presence of interference; hence, interference management is very important for exploiting the benefits of cooperation. This motivates the search for proper methods to prevent interference problems, which is the main target of our current research. Methods: In this study we introduce an efficient cooperative communication scheme which maximizes the received signal-to-noise ratio (SNR) while keeping interference levels below a certain threshold. The introduced scheme is based on two relay selection methods: max(link 2), which maximizes the SNR of the second hop, and max(e2e), which maximizes the end-to-end SNR of the relayed link.
In this scheme, the perfect decode-and-forward (DF) relaying protocol is used by the relays, and a maximum ratio combining (MRC) receiver combines the direct link and the best selected relay link. We derive exact closed-form expressions for the probability density function (PDF) of the SNR, the outage probability, the average capacity, and the average bit error probability of the introduced cooperative schemes. Results & Conclusion: Simulations are used to validate the analytical results, and good agreement is observed. The results confirm the advantage of the introduced cooperation schemes in enhancing wireless communication system performance and in managing interference.
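The two selection rules can be illustrated numerically. Under perfect DF, the end-to-end SNR of a relayed link is limited by its weaker hop, i.e. min(SNR of hop 1, SNR of hop 2), and MRC adds the SNRs of the combined links. The sketch below is a toy example under these standard assumptions; the candidate SNR and interference values are invented, and the closed-form analysis in the abstract is not reproduced.

```python
# Toy relay selection under an interference constraint, for DF relaying.
# Candidate values are illustrative assumptions.

def select_relay(candidates, interference_limit, rule="e2e"):
    """candidates: {relay: (snr_hop1, snr_hop2, interference_level)}.
    Keep relays under the interference threshold, then pick by rule:
      'link2' -> maximize the second-hop SNR,
      'e2e'   -> maximize min(hop1, hop2), the DF end-to-end SNR.
    Returns (best_relay, its end-to-end SNR), or (None, 0.0)."""
    feasible = {r: v for r, v in candidates.items() if v[2] <= interference_limit}
    if not feasible:
        return None, 0.0
    if rule == "link2":
        best = max(feasible, key=lambda r: feasible[r][1])
    else:
        best = max(feasible, key=lambda r: min(feasible[r][0], feasible[r][1]))
    v = feasible[best]
    return best, min(v[0], v[1])

def mrc_snr(snr_direct, snr_relayed):
    """Maximum ratio combining of direct and relayed links adds their SNRs."""
    return snr_direct + snr_relayed

cands = {"R1": (12.0, 3.0, 0.1), "R2": (6.0, 8.0, 0.2), "R3": (20.0, 9.0, 0.9)}
best, snr = select_relay(cands, interference_limit=0.5, rule="e2e")
print(best, mrc_snr(2.0, snr))
```

Note how the interference constraint removes R3 despite its strong hops, after which max(e2e) picks the relay with the best bottleneck hop.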
-
A novel wavelet-based multimodal compression scheme for joint image-signal compression
Authors: Larbi Boubchir, Tahar Brahimi, Régis Fournier and Amine Naït-Ali
Background: Given the important advances in multimedia and networks, including telemedicine applications, the amount of information to store and/or transmit has dramatically increased over time. To overcome the limitations of transmission channels and storage systems, data compression is a useful tool. In general, data compression addresses the problem of reducing the amount of data required to represent digital data, including images and signals. This is achieved by removing redundant information, where the main challenge is to reduce the bit-rate while preserving a high quality of the information. Objective: This work proposes a new multimodal compression methodology allowing joint compression of various data, not necessarily of the same type, using a single codec. Method: The proposed joint signal-image compression methodology inserts the wavelet coefficients of a decomposed signal into the detail region of a wavelet-transformed image at the finest scale (i.e., the highest-frequency sub-bands: horizontal, vertical, or diagonal details) following a spiral scan. The mixture is then compressed using the well-known SPIHT (Set Partitioning In Hierarchical Trees) algorithm. Decompression inverts this process: the mixture is decoded, the signal is separated from the image by a separation function applied to the insertion detail sub-band area, and the signal and image are then reconstructed via dequantization and inverse wavelet transformation. Figure 1 illustrates the corresponding compression scheme. Results: The proposed method was evaluated on medical images with biomedical signals. The experimental results show that this method outperforms the basic version, which inserts the signal samples into the spatial domain of the image prior to the encoding phase.
This is confirmed by a significant improvement in terms of PSNR (peak signal-to-noise ratio) and PRD (percent root-mean-square difference).
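The insertion step can be sketched as follows: the signal's wavelet coefficients are written into the image's finest detail sub-band and later separated and inverse-transformed. This is a simplified sketch of the idea, not the authors' implementation: a one-level Haar transform stands in for the full wavelet decomposition, the spiral scan is reduced to plain row order, and quantization/SPIHT coding are omitted.

```python
# Simplified signal-into-subband insertion and recovery (Haar, one level).
# The real method uses a spiral scan and SPIHT coding of the mixture.

def haar1d(x):
    """One-level 1-D Haar transform: (approximation, detail) halves."""
    a = [(x[2*i] + x[2*i+1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / 2 for i in range(len(x) // 2)]
    return a, d

def ihaar1d(a, d):
    """Inverse of haar1d."""
    x = []
    for ai, di in zip(a, d):
        x += [ai + di, ai - di]
    return x

def embed(subband, coeffs):
    """Overwrite the leading entries of a detail sub-band with signal coeffs."""
    out = list(subband)
    out[:len(coeffs)] = coeffs
    return out

def extract(subband, n):
    """Separation function: read back the first n inserted coefficients."""
    return subband[:n]

signal = [4.0, 2.0, 6.0, 6.0, 1.0, 3.0, 5.0, 7.0]
a, d = haar1d(signal)
payload = a + d                      # signal coefficients to hide
diag_detail = [0.0] * 16             # stand-in for the image's finest sub-band
mixed = embed(diag_detail, payload)  # mixture that would be SPIHT-coded
recovered = extract(mixed, len(payload))
print(ihaar1d(recovered[:4], recovered[4:]))  # reconstructs the signal
```

The finest detail sub-band is a natural hiding place because its coefficients are typically small and coded last, so replacing some of them costs the image relatively little quality.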
-
Geographic Information Systems as a promising area of Education and Scientific research in Qatar
Geographic information systems (GIS), also known as geospatial information systems or geotechnology, are computer software and hardware systems designed to capture, store and manipulate all types of geographical data, as well as to analyze, manage, and display geographic information to inform decision making. Users of GIS range from communities, research institutions, environmental scientists, and health organisations to land-use planners, businesses, and government agencies at all levels. Numerous examples of GIS applications are available in many different journals and are frequent topics of presentations at conferences in the natural and social sciences. For a long time, GIS has been a well-established and independent discipline throughout American and European universities; however, as far as the author knows, no university in the Arab World, or even in the Middle East, offers an undergraduate program in this important discipline. Qatar can be the pioneer in this field in the region by offering such a program through Qatar University or another higher-education institute in the state. The benefits of opening such a program are numerous. Qualified human resources in the field of GIS are in high demand, not only in the region but at the international level as well. The U.S. Department of Labor has designated geotechnology (GIS) as one of the three "mega-technologies" of the 21st century, alongside nanotechnology and biotechnology. Opening a GIS undergraduate program in Qatar would make the state the "Kaaba" of this science in the region and attract students from other states, generating scientific and financial revenue. Moreover, the host institute of this suggested program could establish an international research centre for GIS to carry out studies for the benefit of the region and beyond.
Implementing such a proposal would have a great impact on the Computing and Information Technology Research discipline, which is one of Qatar's core research areas.
-