Qatar Foundation Annual Research Forum Volume 2013 Issue 1
- Conference date: 24-25 Nov 2013
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2013
- Published: 20 November 2013
Measurement Of Refractive Indices Of Ternary Mixtures Using Digital Interferometry And Multi-Wavelength Abbemat Refractometer
Abstract: Knowledge of liquid mixture properties is of significant importance in scientific experimentation and technological development. Thermal diffusion in mixtures plays a crucial role in both nature and technology. The temperature and concentration coefficients of the refractive index, the so-called contrast factors, contribute to studies in various fields, including crude oil experiments (SCCO) and the distribution of crude oil components. The Abbemat refractometer and the Mach-Zehnder interferometer have been proven to be precise, highly accurate, and non-intrusive methods for measuring the refractive index of a transparent medium. Refractive indices of three ternary hydrocarbon mixtures and their pure components, tetrahydronaphthalene (THN), isobutylbenzene (IBB), and dodecane (C12), used mainly in gasoline, were measured experimentally using both a Mach-Zehnder interferometer and a multi-wavelength Abbemat refractometer. The temperature and concentration coefficients of the refractive index (contrast factors), as well as their individual correlations for calculating refractive indices, are presented in this research. The experimental measurements were correlated over a wide range of temperatures and wavelengths and a broad range of compositions. The measured refractive indices were compared with those estimated by several mixing rules: the Lorentz-Lorenz, Gladstone-Dale, Arago-Biot, Eykman, Wiener, Newton, and Oster predictive equations. The experimental values are in substantial agreement with those obtained from the Lorentz-Lorenz, Gladstone-Dale, and Arago-Biot equations, but not with those obtained from the Oster and Newton equations. The temperature, concentration, and wavelength dependence of the refractive index of the mixtures agrees with published data. A comparison with the available literature and mixing rules shows that the new correlations can predict the experimental data with deviations of less than 0.001.
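For reference, the three mixing rules found to agree best with the measurements are commonly written in the following volume-fraction forms (standard textbook formulations, not equations reproduced from the abstract), where n is the refractive index of the mixture and n_i, φ_i are the refractive index and volume fraction of component i:

```latex
% Lorentz-Lorenz:
\frac{n^{2}-1}{n^{2}+2} \;=\; \sum_i \phi_i \,\frac{n_i^{2}-1}{n_i^{2}+2}
% Gladstone-Dale:
n - 1 \;=\; \sum_i \phi_i \,(n_i - 1)
% Arago-Biot:
n \;=\; \sum_i \phi_i \, n_i
```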
Top ten hurdles on the race towards the Internet of Things
By Aref Meddeb

While the Internet is continuously growing and unavoidably moving towards a ubiquitous internetworking technology, ranging from casual data exchange, home entertainment, security, military, healthcare, transportation, and business applications to the Internet of Things (IoT), where anything can communicate anywhere, anytime, there is an increasing and urgent need to define global standards for future Internet architectures as well as service definitions. In the future, the Internet will perhaps become the most exciting entertainment medium. It is very likely that we will be able to hold 3D conferences at home, using holographic projection, spatial sound and haptic transmission. Further, we may no longer use traditional devices such as laptop and desktop computers and cell phones. Monitors and speakers will be completely different, ranging from 3D glasses and headsets to 3D contact lenses and in-ear wireless earphones. The keyboard and mouse will disappear as well and will be replaced by biometrics such as voice, iris and fingerprint recognition, and movement detection. Many projects are being proposed to support and promote the Internet of Things worldwide, ranging from initiatives (http://www.iot-i.eu/public), alliances (http://www.ipso-alliance.org/), forums (http://www.iot-forum.eu/), consortiums (http://iofthings.org/), and architectures (http://www.iot-a.eu/) to research clusters (http://www.internet-of-things-research.eu/), and the list goes on. There is seemingly a race towards leadership of the Internet of Things, to the extent that, often driven by "time-to-market" constraints and business requirements, most proposals made so far fail to consider all facets required to deliver the expected services, and often lead to ambiguous and even contradictory definitions of what the Internet of Things is meant to be. For example, some proposals equate IoT with Web 3.0, while others claim it is primarily based on RFID and similar systems. Between web semantics and cognitive radio communications, there is a huge gap that needs to be filled by adequate communication, security, and application software and hardware. Unfortunately, so far there is no general consensus on the most suitable underlying technology and applications that will help the Internet of Things become a reality. In this presentation, we describe ten of the most challenging hurdles that IoT actors need to face in order to reach their goals. We identify the top-ten issues as 1) how to safely migrate from IPv4 to IPv6 given the proliferation of IPv4 devices, 2) regulation, pricing and neutrality of the future Internet, 3) OAM tools required for accounting and monitoring of future Internet services, 4) future quality-of-service definitions and mechanisms, 5) reliability of future Internet services, 6) future security challenges, 7) scalability of the Internet of Things with its billions of interworked devices, 8) future access (local loop) technologies, 9) future transport (long distance) telecommunication technologies, and 10) future end-user devices and applications. We aim to briefly explain and highlight the impact of each of these issues on service delivery in the context of the Internet of Things. We also intend to describe some existing and promising solutions available so far.
The HAIL platform for big health data
Authors: Syed Sibte Raza Abidi, Ali Daniyal, Ashraf Abusharekh and Samina Abidi

Big data analytics in health is an emerging area due to the urgent need to derive actionable intelligence from the large volumes of healthcare data in order to efficiently manage the healthcare system and to improve health outcomes. In this paper we present a 'big' healthcare data analytics platform, termed Healthcare Analytics for Intelligence and Learning (HAIL), that is an end-to-end healthcare data analytics solution to derive data-driven actionable intelligence and situational awareness to inform and transform health decision-making, systems management and policy development. The innovative aspects of HAIL are: (a) the integration of data-driven and knowledge-driven analytics approaches, (b) a sandbox environment for healthcare analysts to develop and test health policy/process models by exploiting a range of data preparation, analytical and visualization methods, (c) the incorporation of specialized healthcare data standards, terminologies and concept maps to support data analytics, and (d) text analytics to analyze unstructured healthcare data. The architecture of HAIL comprises the following four main modules (Fig. 1): (A) the Health Data Integration module, which entails a semantics-based metadata manager to synthesize health data originating from a range of healthcare institutions into a rich contextualized data resource; data integration is achieved through ETL workflows designed by health analysts and researchers. (B) The Health Analytics module provides a range of healthcare analytics capabilities, including (i) exploratory analytics using data mining to perform data clustering, classification and association tasks, (ii) predictive analytics to predict future trends/outcomes derived from past observations of healthcare processes, (iii) text analytics to analyze unstructured texts (such as clinical notes, discharge summaries, referral notes, clinical guidelines, etc.), (iv) simulation-based analytics to simulate what-if questions based on simulation models, (v) workflow analytics to interact with modeled clinical workflows to understand the effects of various confounding factors, (vi) semantic analytics to infer contextualized relationships, anomalies and deviations through reasoning over a semantic health data model, and (vii) informational analytics to present summaries, aggregations, charts and reports. (C) The Data Visualization module offers a range of interactive data visualizations, such as geospatial visualizations, causal networks, 2D and 3D graphs, pattern clusters and interactive visualizations to explore high-dimensional data. (D) The Data Analytics Workbench is an interactive workspace that enables health data analysts to specify and set up their analytics process in terms of data preparation, the selection and set-up of analytical methods, and the selection of visualization methods. Using the workbench, analysts can design sophisticated data analytics workflows/models using a range of data integration, analytical and visualization methods. The HAIL platform is available via a web portal and a desktop application, and it is deployed on a cloud infrastructure. HAIL can be connected to existing health data sources to provide front-end data analytics. Fig. 2 shows the technical architecture. The HAIL platform has been applied to analyze real-life clinical healthcare situations using actual data from the provincial clinical data warehouse. We will present two case studies of the use of HAIL.
In conclusion, the HAIL platform addresses a critical need for healthcare analytics to impact health decision-making, systems management and policy development.
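As a rough illustration of the kind of analytics workflow the Data Analytics Workbench describes (data preparation, an analytical method, then a presentation step), the sketch below chains three pluggable stages over a toy table. The stage interface, stage names, and pandas-based implementation are illustrative assumptions, not the HAIL API.

```python
# Illustrative three-stage analytics workflow (prepare -> analyze -> present).
# The Stage interface and the toy stages are assumptions, not the HAIL API.
from dataclasses import dataclass
from typing import Callable, List
import pandas as pd

@dataclass
class Stage:
    name: str
    run: Callable[[pd.DataFrame], pd.DataFrame]

def run_workflow(df: pd.DataFrame, stages: List[Stage]) -> pd.DataFrame:
    for stage in stages:
        df = stage.run(df)
        print(f"after {stage.name}: {len(df)} rows")
    return df

# Toy data standing in for integrated health records.
records = pd.DataFrame({
    "age": [34, 61, 47, 70, 55],
    "los_days": [2, 9, 4, 12, 6],   # length of stay
    "readmitted": [0, 1, 0, 1, 0],
})

workflow = [
    Stage("prepare", lambda d: d.dropna()),
    Stage("analyze", lambda d: d.assign(long_stay=d["los_days"] > 7)),
    # Final stage stands in for a visualization/report step.
    Stage("present", lambda d: d.groupby("long_stay")["readmitted"].mean().reset_index()),
]

print(run_workflow(records, workflow))
```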
Utility of a comprehensive Personal Health Record (PHR) strategy
A fundamental component of improving health care quality is to include the patient as a participant in achieving their health goals and making decisions about their care. While many may see clinical information systems as a way to improve communication and track health information for clinicians, the role played by personal health records (PHRs) should not be overlooked in developing a modernized integrated health management system. This presentation will outline the positives and negatives of sharing clinical data between providers and their patients, outline a few PHR functionalities that can support care coordination and health outcomes, and describe the components of a national strategy to support the development of PHRs. A PHR, also referred to as a patient portal, is a system that allows an individual to keep track of their health information. While access to information is important for patients, doctors may feel uncomfortable with the idea of sharing information with their patients in this manner. Alternatively, patients may worry about sharing information they would like to keep private. These concerns can often be controlled with the right policies to prevent unexpected use and sharing of the information, such as providing data segmentation functionalities that allow patients to share only selected data with certain providers, or sharing items such as medications and test results while not sharing other information that providers might like to keep private, such as "notes". Many PHRs draw information out of an existing clinical information system maintained by the physician. If a patient sees providers across multiple systems, the information can still be difficult to track. The most effective PHR allows data from any system to be included. To increase usefulness, a PHR may also store information that the patient adds themselves and provide tools such as medication reminders. Ideally, if a patient is seeing different doctors, those doctors will have information from everyone on the care team, but this does not always happen. If the patient has access to the information, they can share it with a range of providers, improving care coordination at the point of care, where it matters most. A PHR can be useful to all members of a population, and should be considered when planning a clinical information system implementation strategy. However, because different hospitals may provide different systems, it is helpful to have a government-wide strategy. For example, in the United States the government has developed a lightweight standard technical infrastructure based on secure messaging, which is being used to populate PHRs. The government also supports the development of governance to outline standard policies that ensure all stakeholders can trust the information in the system. A strategy to coordinate the use of PHRs is envisioned to have potential in a number of areas, from improvements in individual health to leveraging these systems to improve research on quality and health behaviors.
Service-based Internet Infrastructure for the Qatar World Cup 2022
The shortcomings of today's Internet and the high demand for complex and sophisticated applications and services drive a very interesting and novel research area called the Future Internet. Future Internet research focuses on developing a new network of a similar magnitude to today's Internet but with more demanding and complex design goals and specifications. It strives to solve the issues identified in today's Internet by capitalizing on the advantages of emerging technologies in computer networking such as Software Defined Networking (SDN), autonomic computing, and cloud computing. SDN represents an extraordinary opportunity to rethink computer networks, enabling the design and deployment of a future Internet. Utilizing these technologies leads to significant progress in the development of an enhanced, secure, reliable, and scalable Internet. In 2022, Qatar will host the World Cup, which more than 5 million people from all over the world are expected to attend. This event is expected to put the host country, Qatar, under massive pressure and huge challenges in terms of providing high-quality Internet service, especially with the increasing demand for emerging applications such as video streaming over mobile devices. It is vital to evaluate and deploy state-of-the-art technologies based on a promising future Internet infrastructure in order to provide high-quality Internet services suited to this event and to the continuing rapid growth of the country. The main goal of this paper is to present a new network design of a similar magnitude to today's Internet but with more demanding and complex design goals and specifications, informed by the knowledge and experience gathered from four decades of using the current Internet. This research focuses on the development of a complete system, designed with the new requirements of the Future Internet in mind, that aims to provide, monitor and enhance the increasingly popular video streaming service. The testing environment was built using the Global Environment for Network Innovations (GENI) testbed (see Figure 1). GENI, a National Science Foundation (NSF) funded research and development effort, aims to build a collaborative and exploratory network experimentation platform for the design, implementation and evaluation of future networks. Because GENI is meant to enable experimentation with large-scale distributed systems, experiments will in general contain multiple communicating software entities. Large experiments may well contain tens of thousands of such communicating entities, spread across continental or transcontinental distances. The conducted experiments illustrate how such a system can function under unstable and changing network conditions, dynamically learn its environment, recognize potential service degradation problems, and react to these challenges in an autonomic manner without the need for human intervention.
Advanced millimeter-wave concurrent 44/60GHz phased array for communications and sensing
Authors: Jaeyoung Lee, Cuong Huynh, Juseok Bae, Donghyun Lee and Cam Nguyen

Wireless communications and sensing have become an indispensable part of our daily lives, from communications, public service and safety, consumer, industry, sports, gaming and entertainment, asset and inventory management, and banking to government and military operations. As communications and sensing are poised to address challenging problems and make our lives even better in environments that can potentially disrupt them, such as highly populated urban areas, crowded surroundings, or moving platforms, considerable difficulties emerge that greatly complicate communications and sensing. Significantly improved communication and sensing technologies become absolutely essential to address these challenges. Phased arrays allow RF beams carrying the communication or sensing information to be electronically steered or intercepted from different angles across areas with particular amplitude profiles, enabling swift single- or multi-point communications or sensing over large areas or across many targets while avoiding potentially disrupting obstacles. They are particularly attractive for creating robust communication links for both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions due to their high directivity and scanning ability. In this talk, we will present a novel millimeter-wave dual-band 44/60 GHz phased-array front-end capable of two-dimensional scanning with orthogonal polarizations. This phased array particularly resolves the "RF signal leakage and isolation dilemma" encountered in existing phased-array systems. It integrates electrically the phased-array functions in two separate millimeter-wave bands into a single phased array operating concurrently in dual-band mode. These unique features, not achievable with existing millimeter-wave phased arrays, will push phased-array system performance to the next level, while reducing size and cost, and enhance the capability and applications of wireless communications and sensing, particularly when multifunction, multi-operation, multi-mission deployments over complex environments with miniature systems become essential. This phased array enables vast communication and sensing applications, either communication or sensing or both simultaneously, for instance concurrent Earth-satellite/inter-satellite communications, high-data-rate WPANs and HDMI, and accurate, high-resolution, enhanced-coverage multi-target sensing.
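As background for the electronic beam steering the abstract relies on (a textbook relation for a uniform linear array, not taken from the paper): steering the main beam to an angle θ₀ requires a progressive inter-element phase shift Δφ, and the resulting array factor is

```latex
\Delta\varphi = \frac{2\pi d}{\lambda}\,\sin\theta_0 ,
\qquad
AF(\theta) = \sum_{k=0}^{N-1} \exp\!\left[\, j\,k\left(\frac{2\pi d}{\lambda}\sin\theta - \Delta\varphi\right)\right],
```

which peaks at θ = θ₀, where d is the element spacing, λ the wavelength, and N the number of elements.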
Baseband DSP for dirty RF front-ends: Theory, algorithms, and test bed
Authors: Ozgur Ozdemir, Ridha Hamila and Naofal Al-Dhahir

Orthogonal Frequency Division Multiplexing (OFDM) is widely adopted as the transmission scheme of choice for almost all broadband wireless standards (including WLAN, WiMAX, LTE, DVB, etc.) due to its multipath resilience and practical implementation complexity. The direct-conversion radio frequency (RF) front-end architecture, where the down-conversion to baseband is accomplished in a single stage, requires fewer analog components than the super-heterodyne architecture, where the down-conversion is accomplished with one or more intermediate frequency (IF) stages. Fewer analog components result in reduced power consumption and cost. However, direct-conversion OFDM-based broadband wireless transceivers suffer from several performance-limiting RF/analog impairments, including I/Q imbalance and phase noise (PHN) [1]. I/Q imbalance refers to the amplitude and phase mismatches between the in-phase (I) and quadrature (Q) branches at the transmit and receive sides. In an OFDM system with I/Q imbalance, the transmitted signal at a particular subcarrier is corrupted by interference from the image subcarrier. To compensate for the effect of I/Q imbalance, the received signals from each subcarrier and its image subcarrier are processed jointly. PHN refers to the random unknown phase difference between the carrier signal and the local oscillator. In an OFDM transceiver, PHN rotates the signal constellation on each subcarrier and causes inter-carrier interference between the subcarriers, resulting in significant performance degradation. In this project, we designed novel algorithms for the efficient estimation and compensation of RF impairments, including I/Q imbalance and PHN, for beamforming OFDM systems such as 4G LTE cellular systems and 802.11 wireless local area networks (WLAN) [2]. Conventional OFDM transceivers ignore I/Q imbalance effects and process each OFDM subcarrier separately, which causes an irreducible error floor in the bit error rate performance. In our proposed method, however, each OFDM subcarrier and its image subcarrier are processed jointly to mitigate the effects of I/Q imbalance. This novel method is capable of eliminating the error floor and obtaining performance close to the ideal case where no I/Q imbalance exists. Furthermore, we have developed an experimental OFDM testbed to implement the proposed algorithms. Our testbed uses the Universal Software Radio Peripheral (USRP) N210 RF front-ends and is based on packet-based OFDM similar to the IEEE 802.11a WLAN standard. The baseband processing is done in MATLAB, where the MATLAB driver for USRP is used for stream processing of the transmitted and received signals. The measured experimental results demonstrate that the proposed algorithms improve the performance significantly at low implementation complexity. [1] B. Razavi, RF Microelectronics. Englewood Cliffs, NJ: Prentice-Hall, 1998. [2] O. Ozdemir, R. Hamila, and N. Al-Dhahir, "I/Q imbalance in multiple beamforming OFDM transceivers: SINR analysis and digital baseband compensation," IEEE Transactions on Communications, vol. 61, no. 5, pp. 1914-1925, May 2013. Acknowledgement: This work is supported by the Qatar National Research Fund (QNRF), Grant NPRP 09-062-2-035.
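The joint per-subcarrier-pair processing described above is usually motivated by the standard receive I/Q-imbalance model, shown here in one common textbook parameterization (not necessarily the exact notation of [2], and with noise mixing ignored for brevity): with gain mismatch g and phase mismatch φ, the received sample on subcarrier k is corrupted by its image subcarrier −k,

```latex
Y_k = \alpha\, H_k X_k + \beta\, H_{-k}^{*} X_{-k}^{*} + N_k,
\qquad
\alpha = \frac{1 + g\,e^{-j\phi}}{2}, \quad
\beta = \frac{1 - g\,e^{\,j\phi}}{2},
```

so stacking Y_k and Y_{-k}^* gives a 2x2 linear system in X_k and X_{-k}^* that can be solved jointly to remove the image interference.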
MobiBots: Towards detecting distributed mobile botnets
Authors: Abderrahmen Mtibaa, Hussein Alnuweiri and Khaled A Harras
Completely automated robust segmentation of intravascular ultrasound images
By Chi Hau Chen

It is widely known that the state of a patient's coronary heart disease can be better assessed using intravascular ultrasound (IVUS) than with more conventional angiography. Recent work has shown that segmentation and 3D reconstruction of IVUS pull-back sequence images can be used for computational fluid dynamic simulation of blood flow through the coronary arteries. The resulting map of shear stress in the blood vessel walls can be used to predict the susceptibility of a region of the arteries to future arteriosclerosis and disease. Manual segmentation of images is time-consuming as well as cost-prohibitive for routine diagnostic use. Current segmentation algorithms do not achieve a high enough accuracy because of the presence of speckle due to blood flow, the relatively low resolution of the images, and the presence of various artifacts including guide-wires, stents, vessel branches, and other growths or inflammation. In addition, the image may be blurred by movement distortion, as well as by resolution-related mixing of closely resembling pixels, which produces a type of out-of-focus blur. Robust automated segmentation achieving accuracies of 95% or above has been elusive despite work by a large community of researchers in the machine vision field. We propose a comprehensive approach in which a multitude of algorithms are applied simultaneously to the segmentation problem. In an initial step, pattern recognition methods are used to detect and localize artifacts. We have achieved a high accuracy of 95% or better in detecting frames with stents and the location of the guide-wire in a large data-set consisting of 15 pull-back sequences with about 1000 image frames each. Our algorithms for lumen segmentation using spatio-temporal texture detection and active contour models have achieved accuracies approaching 70% on the same data-set, which is on the high side of the accuracies reported in the literature. Further work is required to combine these methods to increase segmentation accuracy. One approach we are investigating is to combine algorithms using a meta-algorithmic approach. Each segmentation algorithm computes, along with the segmentation, a measure of confidence in the segmentation, which can be biased by prior information about the presence of artifacts. A meta-algorithm then runs a library of algorithms on a sub-sequence of images to be segmented and chooses the segmentation based on the computed confidence measures. Machine learning and testing are performed on a large database. This research is in collaboration with Brigham and Women's Hospital in Boston, which has provided well over 45,000 frames of data for the study.
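A minimal sketch of the meta-algorithmic idea described above: run several segmenters on a frame and keep the result whose self-reported confidence, optionally biased by artifact detections, is highest. The segmenter interface, toy segmenters, and confidence weighting are assumptions for illustration, not the authors' implementation.

```python
# Minimal meta-algorithm sketch: pick, per frame, the segmentation whose
# confidence (down-weighted when artifacts are detected) is highest.
# The segmenter interface and weights are illustrative assumptions.
from typing import Callable, List, Tuple
import numpy as np

# A segmenter returns (mask, confidence in [0, 1]) for one IVUS frame.
Segmenter = Callable[[np.ndarray], Tuple[np.ndarray, float]]

def meta_segment(frame: np.ndarray,
                 segmenters: List[Segmenter],
                 artifact_penalty: float = 0.0) -> np.ndarray:
    """Run all segmenters and return the mask with the best adjusted confidence."""
    best_mask, best_score = None, -np.inf
    for segment in segmenters:
        mask, confidence = segment(frame)
        score = confidence * (1.0 - artifact_penalty)  # bias by prior artifact info
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask

# Toy segmenters: threshold-based stand-ins for texture/active-contour methods.
def threshold_segmenter(frame):
    return frame > frame.mean(), 0.6

def percentile_segmenter(frame):
    return frame > np.percentile(frame, 75), 0.8

frame = np.random.rand(64, 64)            # stand-in for one IVUS image frame
lumen = meta_segment(frame, [threshold_segmenter, percentile_segmenter],
                     artifact_penalty=0.1)  # e.g. a stent was detected in this frame
print(lumen.shape, lumen.dtype)
```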
OSCAR: An incentive-based collaborative bandwidth aggregation system
The explosive demand for mobile data, predicted to reach a 25- to 50-fold increase by 2015, along with expensive data roaming charges and users' expectation to remain continuously connected, is creating novel challenges for service providers and researchers. A potential approach for solving these problems is to exploit all communication interfaces available on modern mobile devices in both solitary and collaborative forms. In the solitary form, the goal is to exploit any direct Internet connectivity on any of the available interfaces by distributing application data across them in order to achieve higher throughput, minimize energy consumption, and/or minimize cost. In the collaborative form, the goal is to enable and incentivize mobile devices to utilize their neighbors' under-utilized bandwidth in addition to their own direct Internet connections. Despite today's mobile devices being equipped with multiple interfaces, there has been a high deployment barrier to adopting collaborative multi-interface solutions. In addition, existing solutions focus on bandwidth maximization without paying sufficient attention to energy efficiency and effective incentive systems. We present OSCAR, a multi-objective, incentive-based, collaborative, and deployable bandwidth aggregation system that fulfills the following requirements: (1) It is easily deployable, requiring no changes to legacy servers, applications, or network infrastructure (i.e., no new hardware such as proxies or routers). (2) It seamlessly exploits available network interfaces in solitary and collaborative forms. (3) It adapts to real-time Internet characteristics and varying system parameters to achieve efficient utilization of these interfaces. (4) It is equipped with an incentive system that encourages users to share their bandwidth with others. (5) It adopts an optimal multi-objective, multi-modal scheduler that maximizes the overall system throughput while minimizing cost and energy consumption based on user requirements and system status. (6) It leverages incremental system adoption and deployment to further enhance performance gains. A typical scenario for OSCAR is shown in Figure 1. Our contributions are summarized as follows: (1) designing the OSCAR system architecture that fulfills the requirements above; (2) formulating OSCAR's data scheduler as an optimal multi-objective, multi-modal scheduler that takes user requirements, device context information, and application requirements into consideration while distributing application data across multiple local and neighboring devices' interfaces; and (3) developing the OSCAR communication protocol, which implements our proposed credit-based incentive system and enables secure communication between the collaborating nodes and the OSCAR-enabled servers. We evaluate OSCAR via implementation on Linux, as well as via simulation, and compare the results to the optimal achievable throughput, cost, and energy consumption rates. The OSCAR system, including its communication protocol, is implemented over the Click Modular Router framework in order to demonstrate its ease of deployment. Our results, which are verified via NS2 simulations, show that with no changes to current Internet architectures, OSCAR reaches the throughput upper bound. It also provides up to 150% enhancement in throughput compared to current operating systems, without changes to legacy servers.
Our results also demonstrate OSCAR's ability to maintain cost and energy consumption levels within the user-defined acceptable ranges.
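One plausible way to read the multi-objective scheduler described above, shown only as an illustrative formulation and not the paper's exact model: choose the fraction x_i of application traffic assigned to each local or neighboring interface i so as to

```latex
\max_{x_i \ge 0,\ \sum_i x_i = 1} \;
\sum_i x_i R_i \;-\; \lambda_c \sum_i x_i c_i \;-\; \lambda_e \sum_i x_i e_i ,
```

where R_i, c_i, and e_i are the estimated throughput, monetary cost, and energy cost of interface i, and λ_c, λ_e encode the user's cost and energy preferences.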
Towards image-guided, minimally-invasive robotic surgery for partial nephrectomy
Introduction: Surgery remains one of the primary methods for removing cancerous tumours. Minimally-invasive robotic surgery, in particular, provides several benefits, such as filtering of hand tremor, more complex and flexible manipulation capabilities that lead to increased dexterity and higher precision, and more comfortable seating for the surgeon. All of these in turn lead to reduced blood loss, lower infection and complication rates, less post-operative pain, shorter hospital stays, and better overall surgical outcomes. Pre-operative 3D medical imaging modalities, mainly magnetic resonance imaging (MRI) and computed tomography (CT), are used for surgical planning, in which tumour excision margins are identified for maximal sparing of healthy tissue. However, transferring such plans from the pre-operative frame of reference to the dynamic intra-operative scene remains a necessary yet largely unsolved problem. We summarize our team's progress towards addressing this problem, focusing on robot-assisted partial nephrectomy (RAPN) performed with a da Vinci surgical robot. Method: We perform pre-operative 3D image segmentation of the tumour and surrounding healthy tissue using interactive random walker image segmentation, which provides an uncertainty-encoding segmentation used to construct a 3D model of the segmented patient anatomy. We reconstruct the 3D geometry of the surgical scene from the stereo endoscopic video, regularized by the patient-specific shape prior. We process the endoscopic images to detect tissue boundaries and other features. We then align, first via rigid and then via deformable registration, the pre-operative segmentation to the 3D reconstructed scene and the endoscopic image features. Finally, we present to the surgeon an augmented reality view showing an overlay of the tumour resection targets on top of the endoscopic view, in a way that depicts the uncertainty in localizing the tumour boundary. Material: We collected pre-operative and intra-operative patient data in the context of RAPN, including stereo endoscopic video at full HD 1080i (da Vinci S HD Surgical System), CT images (Siemens CT Sensation 16 and 64 slices), MR images (Siemens MRI Avanto 1.5T), and US images (Ultrasonix SonixTablet with a flexible laparoscopic linear probe). We also acquired CT images and stereo video from in-silico phantoms and ex-vivo lamb kidneys with artificial tumours for test and validation purposes. Results and Discussion: We successfully developed a novel proof-of-concept framework for a prior- and uncertainty-encoded augmented reality system that fuses pre-operative patient-specific information into the intra-operative surgical scene. Preliminary studies and initial surgeons' feedback on the developed augmented reality system are encouraging. Our future work will focus on investigating the use of intra-operative US data in our system to leverage all imaging modalities available during surgery. Before full system integration of these components, improving the accuracy and speed of the aforementioned algorithms, and the intuitiveness of the augmented reality visualization, remain active research projects for our team.
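The "first rigid, then deformable" registration step can be illustrated with a standard least-squares rigid (Kabsch/Procrustes) fit between corresponding pre-operative and reconstructed 3D points. This is a generic sketch under the assumption of known correspondences, not the authors' registration pipeline.

```python
# Generic rigid (Kabsch) alignment of corresponding 3D point sets: finds R, t
# minimizing ||R @ p + t - q||. Illustrative of the rigid-registration step only.
import numpy as np

def rigid_align(P: np.ndarray, Q: np.ndarray):
    """P, Q: (N, 3) corresponding points. Returns rotation R (3x3) and translation t (3,)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Toy check: recover a known rotation and translation.
rng = np.random.default_rng(0)
P = rng.random((50, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = rigid_align(P, Q)
print(np.allclose(R, R_true, atol=1e-6), np.round(t, 3))
```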
Summarizing machine translation text: An English-Arabic case study
Authors: Houda Bouamor, Behrang Mohit and Kemal Oflazer

Machine Translation (MT) has been championed as an effective technology for knowledge transfer from English to languages with less digital content. An example of such efforts is the automatic translation of the English Wikipedia into languages with smaller collections, such as Arabic. However, MT quality is still far from ideal for many languages and text genres. While translating a document, many sentences are poorly translated, which can yield incorrect text and confuse the reader. Moreover, some of these sentences are not as informative and could be summarized to make a more cohesive document. Thus, for tasks in which complete translation is not mandatory, MT can be effective if the system can provide an informative subset of the content with higher translation quality. For this scenario, text summarization can provide effective support for MT by keeping only the most important and informative parts of a given document to translate. In this work, we demonstrate a framework combining MT and text summarization that replaces the baseline translation with a proper summary that has higher translation quality than the full translation. For this, we combine a state-of-the-art English summarization system and a novel framework for predicting MT quality without references. Our framework is composed of the following major components: (a) a standard machine translation system, (b) a reference-free MT quality estimation system, (c) an MT-aware summarization system, and (d) an English-Arabic sentence matcher. More specifically, our English-Arabic system reads in an English document along with its baseline Arabic translation and outputs, as a summary, a subset of the Arabic sentences based on their informativeness and their translation quality. We demonstrate the utility of our system by evaluating it with respect to its translation and summarization quality, and show that we can balance improving MT quality against maintaining decent summarization quality. For summarization, we conduct both reference-based and reference-free evaluations and observe performance in the range of state-of-the-art systems. Moreover, the translation quality of the summaries shows a substantial improvement over the baseline translation of the entire documents. This MT-aware summarization approach can be applied to the translation of texts such as Wikipedia articles. For such domain-rich articles, there is large variation in translation quality across different sections. An intelligent reduction of the translation task results in an improved final outcome. Finally, the framework is mostly language-independent and can be easily customized for different target languages and domains.
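A toy sketch of the selection step implied by components (b)-(d): rank sentences by a weighted combination of informativeness and estimated translation quality and keep a subset. The two scoring functions and the weighting below are placeholders, not the paper's summarizer or quality-estimation model.

```python
# Toy MT-aware summarization selection: keep sentences that score well on a
# weighted combination of informativeness and estimated translation quality.
# Both scorers are placeholders, not the paper's summarizer or QE model.
from typing import List, Tuple

def informativeness(sentence: str) -> float:
    # Placeholder: longer sentences treated as more informative, capped at 1.0.
    return min(len(sentence.split()) / 20.0, 1.0)

def estimated_mt_quality(sentence: str) -> float:
    # Placeholder for a reference-free quality-estimation score in [0, 1];
    # here, syntactically complex sentences are assumed harder to translate.
    return 1.0 - min(sentence.count(",") * 0.1, 0.5)

def select_for_translation(sentences: List[str], budget: int,
                           alpha: float = 0.5) -> List[Tuple[str, float]]:
    scored = [(s, alpha * informativeness(s) + (1 - alpha) * estimated_mt_quality(s))
              for s in sentences]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:budget]          # summary = the best-scoring subset

doc = [
    "The city was founded in the ninth century.",
    "Its economy, which depends on trade, agriculture, and, more recently, tourism, grew quickly.",
    "It is the capital of the region.",
]
for sentence, score in select_for_translation(doc, budget=2):
    print(f"{score:.2f}  {sentence}")
```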
Distributed algorithms in wireless sensor networks: An approach for applying binary consensus in large testbeds
Our work represents a new starting point for a wireless sensor network implementation of a cooperative algorithm called the binary consensus algorithm. Binary consensus is used to allow a collection of distributed entities to reach consensus regarding the answer to a binary question, with the final decision based on the majority opinion. Binary consensus can play a basic role in increasing the accuracy of detecting event occurrence. Existing work on binary consensus focuses on simulation of the algorithm in a purely theoretical sense. We have adapted the binary consensus algorithm for use in wireless sensor networks. This is achieved by specifying how motes find partners with which to update state, as well as by adding a heuristic that lets individual motes determine convergence. In traditional binary consensus, individual nodes do not have a stop condition, meaning nodes continue to transmit even after convergence has occurred. In WSNs, however, this is unacceptable since it consumes power. So, in order to save power, sensor motes should stop communicating once the whole network has converged. For that reason we have designed a tunable heuristic value N that allows motes to estimate when convergence has occurred. We have evaluated our algorithm successfully in hardware using 139 IRIS sensor motes and have further supported our results using the TOSSIM simulator. We were able to minimize the convergence time, reaching optimal results. The results also show that as the network gets denser, the convergence time decreases. In addition, convergence speed depends on the number of motes present in the network. During the experiments none of the motes failed and our algorithm converged correctly. The hardware as well as the simulation results demonstrate that the convergence speed depends on the topology type, the number of motes present in the network, and the distribution of the initial states.
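A simplified simulation sketch of the stop heuristic described above: each mote keeps exchanging its current binary decision with random partners and considers itself converged once its decision has survived N consecutive exchanges unchanged. The pairwise update rule here (adopt the majority of opinions observed so far) is a deliberately simplified stand-in, not the exact binary consensus state machine evaluated in the paper.

```python
# Simplified gossip simulation focusing on the tunable stop heuristic N:
# a mote stops initiating exchanges after its decision is unchanged for N
# consecutive exchanges. The update rule (majority of decisions seen so far)
# is a simplified stand-in for the binary consensus state machine.
import random

def simulate(num_motes=139, ones_fraction=0.6, N=10, max_rounds=10_000, seed=1):
    random.seed(seed)
    decisions = [1 if i < int(num_motes * ones_fraction) else 0 for i in range(num_motes)]
    ones_seen = decisions[:]          # tally of 1-opinions each mote has observed
    total_seen = [1] * num_motes
    stable = [0] * num_motes          # consecutive exchanges without a change
    for rounds in range(1, max_rounds + 1):
        active = [i for i in range(num_motes) if stable[i] < N]
        if not active:
            return decisions, rounds  # every mote has locally converged
        a = random.choice(active)
        b = random.choice([j for j in range(num_motes) if j != a])
        for x, y in ((a, b), (b, a)):     # both partners observe each other's decision
            ones_seen[x] += decisions[y]
            total_seen[x] += 1
            new = 1 if 2 * ones_seen[x] > total_seen[x] else 0
            stable[x] = stable[x] + 1 if new == decisions[x] else 0
            decisions[x] = new
    return decisions, max_rounds

decisions, rounds = simulate()
print(f"stopped after {rounds} exchanges, "
      f"majority decision = {round(sum(decisions) / len(decisions))}")
```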
Silicon radio-frequency integrated-circuits for advanced wireless communications and sensing
By Cam Nguyen

Silicon-based Radio-Frequency Integrated Circuit (RFIC) hardware is the backbone of advanced wireless communication and sensing systems, enabling low-cost, small-size, and high-performance single-chip solutions. Advanced RF wireless systems, and in turn silicon RFICs, are relevant not only to commercial and military applications but also to national infrastructures. This importance is even more pronounced as the development of civilian technologies becomes increasingly important to national economic growth. New applications utilizing silicon RFIC technologies continue to emerge, spanning spectrums from ultra-wideband to millimeter-wave and submillimeter-wave ultra-high-capacity wireless communications; from sensing for airport security to inventory for gas and oil; and from detection and inspection of buried underground oil and gas pipes to wireless power transmission and data communications for smart wells. In this talk, we will present some of our recent developments in silicon RFICs for advanced wireless communications and sensing.
PLATE: Problem-based learning authoring and transformation environment
The Problem-based Learning Authoring and Transformation Environment (PLATE) project seeks to improve student learning using innovative approaches to problem-based learning (PBL) in a cost-effective, flexible, interoperable, and reusable manner. Traditional subject-based learning that focuses on passively learning facts and reciting them out of context is no longer sufficient to prepare potential engineers, and all students, to be effective. Within the last two decades, the problem-based learning approach to education has started to make inroads into engineering and science education. This PBL educational approach comprises an authentic, ill-structured problem with multiple possible routes to multiple possible solutions. The PBL approach offers unscripted opportunities for students to identify personal knowledge gaps as starting points for individual learning. Additionally, it requires a facilitator (not a traditional teacher) who guides learning by asking probing questions that model expert cognitive reasoning and problem-solving strategies. Bringing real-life context and technologies into the curriculum through a problem-based learning approach encourages students to become independent workers, critical thinkers, problem solvers, lifelong learners, and team workers. A systematic approach to supporting online PBL is the use of a pedagogy-generic e-learning platform such as IMS Learning Design (IMS-LD 2003), an e-learning technical standard useful for scripting a wide range of pedagogical strategies as formal models. The PLATE project uses the IMS-LD strategies. It seeks to research and develop a process modeling approach, together with software tools, to support the development and delivery of face-to-face, online, and hybrid PBL courses or lessons in a cost-effective, flexible, interoperable, and reusable manner. The research team seeks to prove that the PLATE authoring system optimizes learning and that the PLATE system improves learning in PBL activities. For this poster presentation, the research team will demonstrate the progress it has made within the first year of research. This includes the development of a prototype PBL scripting language to represent a wide range of PBL models, the creation of transformation functions to map PBL models represented in the PBL scripting language into executable models represented in IMS-LD, and the architecture of the PLATE authoring tool. The research team plans to illustrate that the research and development of a PBL scripting language and the associated authoring and execution environment can provide a significant thrust toward further research on PBL by using meta-analysis, designing effective PBL models, and extending or improving a PBL scripting language. The team believes that PBL researchers can use the PBL scripting language and authoring tools to create, analyze, test, improve, and communicate various PBL models. The PLATE project can enable PBL practitioners to develop, understand, customize, and reuse PBL models at a high level by relieving the burden of handling the complex details of implementing a PBL course. The research team believes that the project will stimulate the application and use of PBL in curricula with online learning practice by incorporating PBL support into popularly used e-learning platforms and by providing a repository of PBL models and courses.
Advance In Adaptive Modulation For Fading Channels
Smartphones are becoming the dominant handsets available to wireless technology users. Wireless access to the Internet is also becoming the default scenario for the vast majority of Internet users. The increasing demand for high-speed wireless Internet services is pushing current technologies to their limits due to channel impairments. The conventional adaptive modulation technique (CAMT) is no longer sufficient for the high data rate requirements of new technologies and of wireless video streaming, in addition to other applications such as downloading large files. The CAMT is a powerful technique that is currently used in advanced wireless communication systems such as Long Term Evolution (LTE). It is used to enhance the energy efficiency and increase the spectral efficiency of wireless communication systems over fading channels. The CAMT dynamically changes the modulation scheme based on channel conditions to maximize throughput with minimum bit error rate (BER), using the channel state information of each user, which is sent back to the transmitter by the receiver via a reliable channel. The CAMT is based on a predefined set of signal-to-noise ratio (SNR) ranges for different modulation orders. An increase in SNR moves the channel into a higher SNR range and hence a higher modulation order, allowing a higher transmission speed that exploits the good channel conditions. In order to minimize BER, when the channel condition degrades, the modulation order is reduced, which results in lower spectral efficiency but a more robust modulation scheme. Dynamically changing the modulation order based on the SNR range of the radio channel is the key mechanism by which CAMT increases throughput and minimizes BER. This work, however, proposes an advance in AMT based on utilizing additional channel state information beyond the SNR ranges. The new information relates to how severe the fading experienced by the channel is. The severity is measured here by the amount of fading (AF), which is computed using the first and second central moments of the envelope amplitude. This additional information helps to distinguish between channel conditions that have the same SNR ranges but different levels of fading severity, which can be exploited to increase the performance of CAMT. Different levels of fading severity with similar SNR ranges have been tested with Nakagami-m fading channels; in these channels the AF measure of fading severity equals 1/m. The investigation in this work therefore tests how to leverage the AF dimension in addition to the conventional approach used in CAMT. We show that the BER of different modulation schemes depends on the amount of fading within every SNR range defined by CAMT. Current results show dramatic improvements in BER performance and throughput when AF is leveraged together with the SNR-range approach defined in CAMT. Utilizing AF with SNR ranges allows higher modulation orders to be adopted in channel conditions for which this was not possible with the conventional AMT.
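For reference, the amount of fading is conventionally defined from the moments of the fading envelope α (a standard definition consistent with the 1/m value quoted above, not a formula reproduced from the abstract):

```latex
AF \;=\; \frac{\operatorname{Var}(\alpha^{2})}{\left(\mathbb{E}[\alpha^{2}]\right)^{2}}
\;=\; \frac{\mathbb{E}[\alpha^{4}] - \left(\mathbb{E}[\alpha^{2}]\right)^{2}}{\left(\mathbb{E}[\alpha^{2}]\right)^{2}},
\qquad
AF_{\text{Nakagami-}m} = \frac{1}{m}.
```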
Real-time multiple moving vehicle detection and tracking framework for autonomous UAV monitoring of urban traffic
Unmanned Aerial Vehicles (UAVs) have the potential to provide comprehensive information for traffic monitoring, road conditions and emergency response. However, to enable autonomous UAV operations, video images captured by UAV cameras must be processed using state-of-the-art algorithms for vehicle detection, recognition, and tracking. Processing of aerial UAV images is nevertheless challenging because the images are usually captured with low-resolution cameras, from high altitudes, and while the UAV is in continuous motion. The latter enforces the need to decouple camera and scene motion, and most techniques for moving vehicle detection perform ego-motion compensation to separate camera motion from scene motion. To this end, registration of successive image frames is first carried out to match two or more images of the same scene taken at different times, followed by moving vehicle labeling. Detected vehicles of interest are routinely tracked by the UAV. However, vehicle tracking in UAV imagery is challenging due to constantly changing camera vantage points, changes in illumination, and occlusions. The majority of existing vehicle detection and tracking techniques suffer from reduced accuracy and/or entail intensive processing that prohibits their deployment onboard UAVs unless intensive computational resources are utilized. This paper presents a novel multiple moving vehicle detection and tracking framework that is suitable for UAV traffic monitoring applications. The proposed framework executes in real time with improved accuracy and is based on image feature processing and projective geometry. FAST image features are first extracted, and outlier features are then computed using least median square estimation. Moving vehicles are subsequently detected with a density-based spatial clustering algorithm. Vehicles are tracked using Kalman filtering, while an overlap-rate-based data association mechanism followed by a tracking persistency check is used to discriminate between true moving vehicles and false detections. The proposed framework does not involve the explicit application of image transformations, i.e., warping, to detect potential moving vehicles, which reduces computational time and decreases the probability of wrongly detected vehicles due to registration errors. Furthermore, the use of data association to correlate detected and tracked vehicles, along with a selective update of the target's template based on the data association decision, significantly improves the overall tracking accuracy. For quantitative evaluation, a testbed has been implemented to evaluate the proposed framework on three datasets: the standard DARPA Eglin-1 and RedTeam datasets, and a home-collected dataset. The proposed framework achieves recall rates of 97.1% and 96.8% (average 96.9%) and precision rates of 99.1% and 95.8% (average 97.4%) for the Eglin-1 and RedTeam datasets, respectively, with an overall average of 97.1%. When evaluated on the home-collected dataset, it achieved 95.6% recall and 96.3% precision. Compared to other moving vehicle detection and tracking techniques found in the literature, the proposed framework achieves higher accuracy on average and is less computationally demanding. The quantitative results thus demonstrate the potential of the proposed framework for autonomous UAV traffic monitoring applications.
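A minimal sketch of the first two stages (FAST feature extraction followed by density-based clustering of candidate moving-object features), using OpenCV and scikit-learn as stand-ins. The thresholds and the synthetic frame are illustrative assumptions, and the ego-motion/outlier estimation, Kalman tracking, and data association stages are omitted.

```python
# Sketch of two stages of the detection pipeline: FAST keypoints, then DBSCAN
# clustering of candidate feature locations into vehicle detections.
# Thresholds and the synthetic frame are illustrative; ego-motion estimation,
# Kalman tracking, and data association are omitted.
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)   # stand-in for a UAV frame

# 1) FAST feature extraction.
fast = cv2.FastFeatureDetector_create(threshold=25)
keypoints = fast.detect(frame, None)
points = np.array([kp.pt for kp in keypoints], dtype=np.float32)

# 2) In the real pipeline, only features that are outliers w.r.t. the estimated
#    camera motion (least median square) would be kept; here we cluster all points.
if len(points) > 0:
    labels = DBSCAN(eps=15.0, min_samples=8).fit_predict(points)
    detections = [points[labels == k].mean(axis=0)       # cluster centroid
                  for k in set(labels) if k != -1]        # -1 marks noise points
    print(f"{len(detections)} candidate vehicle detections")
```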
Mental task discrimination: Digital signal processing
Abstract: The objective of this research is to increase the accuracy of discrimination between different mental tasks through careful analysis of the brain's electrical (EEG) signals, which are received from sensors, amplified, and stored. EEG signals are weak and are strongly affected by the area surrounding the person, so they are also loaded with noise that changes them from their true values. A goal of this research is therefore to remove as much of the noise as possible from the EEG signals, to represent each signal precisely by factors and components unique to it, and then to train the system on these signals so that each is associated with a particular mental task. After the system is trained on the available training signals, the testing phase begins: the system is given a new signal, compares it with what it has stored, and classifies it into one of the stored mental tasks. Five mental tasks are distinguished in this research: (1) baseline, when the person is completely relaxed; (2) multiplication, when the mind is computing a non-trivial multiplication; (3) letter composing, when the person imagines writing and forming a letter in the mind; (4) rotation, when the person imagines a three-dimensional model rotating around its axis; and (5) counting, when the person imagines writing numbers in sequence. An accuracy of 83.7813% was achieved in distinguishing the five tasks: out of 100 EEG signals, roughly 84 were identified correctly together with the mental task that produced them. There are high hopes of building on this research by increasing the number of mental tasks that can be distinguished, until all mental tasks can be discriminated and what a person is thinking, and even feeling, can be recognized. A further objective of this research is to try to help people with special needs by recognizing what is going on in their minds and meeting their needs. It is hoped that all those interested in this area will benefit from this research, that it will lead to new advances in analyzing and understanding the functions of the human mind, and that it will offer every support to fellow humans with special needs.
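The abstract does not name a specific feature set or classifier. As a purely generic illustration of the train-then-test workflow it describes (represent each signal by a few features, train on labeled signals, classify new ones), the sketch below uses synthetic signals, simple statistical features, and a nearest-centroid classifier from scikit-learn as placeholders.

```python
# Generic train/test sketch for mental-task classification from EEG-like signals.
# The synthetic data, the simple statistical features, and the classifier choice
# are placeholders for illustration only; the abstract does not specify them.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestCentroid
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
TASKS = ["baseline", "multiplication", "letter", "rotation", "counting"]

def synthetic_trial(task_id, n_samples=256):
    # Each task gets a slightly different dominant "rhythm" plus noise.
    t = np.arange(n_samples)
    return (np.sin(2 * np.pi * (5 + 3 * task_id) * t / n_samples)
            + 0.5 * rng.standard_normal(n_samples))

def features(signal):
    # Crude per-trial features standing in for a real EEG representation.
    return [signal.mean(), signal.std(), np.abs(np.diff(signal)).mean()]

X = np.array([features(synthetic_trial(k)) for k in range(len(TASKS)) for _ in range(40)])
y = np.array([k for k in range(len(TASKS)) for _ in range(40)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = NearestCentroid().fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```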
Macro-small handover in LTE under energy constraints
By Nizar Zorba

Green communications have emerged as one of the most important trends in wireless communications because of their advantages of interference reduction, longer battery life and lower electricity bills. Their application to handover mechanisms is crucial for integration into practical systems, as handover is one of the most resource-consuming operations in the system and has to be optimized under the objective of green communications. On the other hand, a decrease in energy consumption should not mean lower performance for the operator and customers. Therefore, this work presents a hybrid handover mechanism in which the two conflicting objectives of load balancing and energy consumption are tackled: the operator's objective is to balance the data load among its macro- and small cells, while the user equipment's objective is to decrease the consumed energy in order to guarantee a longer battery life.
A concurrent tri-band low-noise amplifier for multiband communications and sensing
By Cam Nguyen

Concurrent multiband receivers receive and process multiple frequency bands simultaneously. They are thus capable of providing multitask or multifunction operation to meet consumer needs in modern wireless communications. Concurrent multiband receivers require at least some of their components to operate concurrently at different frequency bands, which results in a substantial reduction of cost and power dissipation. Fig. 1 shows a simplified concurrent multiband receiver, typically consisting of an off-chip antenna and an on-chip low-noise amplifier (LNA) and mixer. While the mixer can be designed as a multiband or wideband component, the LNA should perform as a concurrent multiband device and hence requires proper input matching to the antenna, a low noise figure (NF), high gain and high linearity to handle multiple input signals simultaneously. Therefore, the design of concurrent multiband LNAs is the most critical issue for the implementation of fully integrated low-cost and low-power concurrent multiband receivers. In this talk, we present a 13/24/35-GHz concurrent tri-band LNA implementing a novel tri-band load composed of two passive LC notch filters with feedback. The tri-band LNA, fabricated in a 0.18-μm SiGe BiCMOS process, achieves power gains of 22.3/24.6/22.2 dB at 13.5/24.5/34.5 GHz, respectively. It has a best noise figure of 3.7/3.3/4.3 dB and an IIP3 of -17.5/-18.5/-15.6 dBm in the 13.5/24.5/34.5 GHz pass-bands, respectively. The tri-band LNA consumes 36 mW from a 1.8 V supply and occupies 920 μm × 500 μm.
Hardware implementation of principal component analysis for gas identification systems on the Zynq SoC platform
Principal component analysis (PCA) is a commonly used technique for data reduction in general, as well as for dimensionality reduction in gas identification systems when a sensor array is being used. A complete PCA IP core for gas applications has been designed and implemented on the Zynq programmable system on chip (SoC). The new heterogeneous Zynq platform, with its ARM processor and programmable logic (PL), was used because it is becoming an interesting alternative to conventional field programmable gate array (FPGA) platforms. All steps of the learning and testing phases of PCA, from the mean computation through normalization, covariance matrix and eigenvector computation, to the projection of the data onto the new space, were developed in C and synthesized using the new Xilinx Vivado high-level synthesis (HLS) tool. The eigenvectors were found using the Jacobi method. The implemented hardware of the presented PCA algorithm for a 16×30 matrix was faster than the software counterpart, with a speed-up of 1.41 times compared with execution on a desktop running a 64-bit Intel i7-3770 processor at 3.40 GHz. The implemented IP core consumed an average of 23% of all on-chip resources. The PCA algorithm used in the learning phase is executed first, so that the system is trained on a specific data set and produces the vector of means along with the eigenvectors that are then used in the testing part. The PCA algorithm used in the testing phase will also be used in real-time identification. For testing purposes, a data set was used that represents the output of a 16-element gas sensor array when exposed to three types of gases (CO, ethanol and H2) in ten different concentrations (20, 40, 60, 80, 120, 140, 160, 180 and 200 ppm). The aim was to reduce the 30 samples of 16 dimensions to 30 vectors of 2 or 3 dimensions, depending on the need. The combination of the Zynq platform and the HLS tool showed many benefits. Using Vivado HLS resulted in a considerable gain in prototyping time, because the design was specified in a high-level language such as C or C++ rather than a hardware description language such as VHDL or Verilog. Using the Zynq platform also highlighted some interesting advantages over conventional FPGA platforms, such as the possibility of splitting the design, executing the simple part in software on the ARM processor and leaving the complex part for hardware acceleration. It is planned to further optimize the IP core using the Vivado HLS directives; the developed core will be used in a larger gas identification system for dimensionality reduction purposes. The larger gas identification system will be used to identify a given gas and estimate its concentration, and will be part of a Low Power Reconfigurable Self-calibrated Multi-Sensing Platform.
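A software reference sketch of the learning-phase computation described above (mean, normalization, covariance, eigenvectors, projection), written in NumPy on a random 30×16 stand-in for the gas-sensor data; np.linalg.eigh is used here in place of the Jacobi eigensolver implemented in the IP core.

```python
# Software reference for the PCA learning phase on a 30x16 gas-sensor matrix:
# mean, normalization, covariance, eigen-decomposition, projection to 2-3 dims.
# Random data stands in for the sensor array; eigh replaces the Jacobi solver.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((30, 16))                 # 30 samples from a 16-sensor array

mean = X.mean(axis=0)                    # 1) per-sensor mean
std = X.std(axis=0)
Xn = (X - mean) / std                    # 2) normalization
C = np.cov(Xn, rowvar=False)             # 3) 16x16 covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)     # 4) eigenvectors (Jacobi method in the IP core)
order = np.argsort(eigvals)[::-1]        # sort components by explained variance
W = eigvecs[:, order[:3]]                # keep 2 or 3 principal components
Y = Xn @ W                               # 5) projection onto the new space

print(Y.shape)                           # (30, 3): reduced data for the testing phase
```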
-
-
-
Sonar placement and deployment in a maritime environment
By Selim BoraGiven a water-terrain area of interest (waterways, ports, etc.), this paper attempts to efficiently allocate underwater sonars to achieve a reasonable amount of coverage within a limited budget. Coverage is defined as the capability of sonars to detect threats. Though total coverage is desired, priority is given to the criticality/importance attached to the location of an area of interest on a grid-based system. Unlike other works in the literature, the developed model takes into consideration uncertainty inherent in the detection probability of sonars. Apart from issues of sonar reliability, underwater terrain, with its changing conditions, is bound to affect detection probabilities. While taking into consideration the specific physics of sonars in the model development, the model also adopts a hexagonal grid-based system to ensure more efficient placement of sonars. Based on an initially proposed mixed-integer program, a robust optimization model also is proposed to take care of uncertainties. With smaller scale problems, the model works adequately within a relatively short time period. However, large scale problems require extensive memory, taking much longer. As such, a heuristic is proposed as an alternative to the proposed model. Experimental results indicate the heuristic works effectively under most circumstances and performs less effectively under a few limited scenarios.
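The abstract does not spell out the heuristic, so the following Python sketch only illustrates the general idea of grid-based coverage with location priorities: a greedy routine that repeatedly places a sonar at the candidate cell covering the largest remaining priority-weighted demand within its detection radius, until the budget is exhausted. The square grid, detection radius, costs and priorities are illustrative assumptions, not the paper's hexagonal-grid robust optimization model or its heuristic.

```python
import numpy as np

def greedy_sonar_placement(priority, radius, unit_cost, budget):
    """Greedy max-coverage baseline on a square grid (illustrative stand-in only)."""
    rows, cols = priority.shape
    covered = np.zeros_like(priority, dtype=bool)
    placements = []
    spent = 0.0

    def coverage_gain(r, c):
        rr, cc = np.ogrid[:rows, :cols]
        in_range = (rr - r) ** 2 + (cc - c) ** 2 <= radius ** 2
        return priority[in_range & ~covered].sum(), in_range

    while spent + unit_cost <= budget:
        best = max(((r, c) for r in range(rows) for c in range(cols)),
                   key=lambda rc: coverage_gain(*rc)[0])
        gain, mask = coverage_gain(*best)
        if gain <= 0:
            break                       # nothing of value left to cover
        covered |= mask
        placements.append(best)
        spent += unit_cost
    return placements, covered

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    priority = rng.random((20, 20))     # criticality of each grid cell
    spots, covered = greedy_sonar_placement(priority, radius=3, unit_cost=1.0, budget=5.0)
    print(spots, covered.mean())
```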
-
-
-
Cloud-based development life cycle: Software testing as service
More LessCloud computing is an emerging paradigm, which is changing the way computing resources are provisioned, accessed, utilized, maintained and managed. The SWOT analysis for the Cloud is depicted in Table 1. Cloud computing is increasingly changing the way software products and services are produced and consumed, thereby implying the need for a change in the ways, methods, tools and concepts by which these products are tested. Software testing is an important quality-control activity within the software development lifecycle. Software testing involves both functional (e.g., bugs) and non-functional (e.g., regression) testing. It verifies and validates the finished product to ensure that the development effort meets the requirements specification. This process often requires the consumption of resources over a limited period of time. These resources can be costly or not readily available, which in turn can affect the efficiency of the testing process. Though this process is important, it is not a business-critical process because it does not involve overly sensitive business data, which makes it an ideal case for migration to the cloud. The increasing complexity and distribution of teams, applications, processes and services, along with the need for adequate testing approaches for cloud-based applications and services, creates a convincing case for cloud-based software testing. Cloud-based testing, or Software Testing as a Service, is a new way of carrying out testing, using the cloud as the underlying platform to provide on-demand software testing services via the internet. Table 2 below shows a SWOT analysis for cloud-based testing, from which a comparison between traditional software testing and cloud-based testing can be made and the advantages of the cloud approach can be drawn. A number of major industrial players such as IBM, HP, UTest, SOASTA, Sogetti, and SauceLabs, to mention a few, now offer various cloud-based testing services, which presents many advantages to customers. Though cloud-based testing offers many advantages and benefits over traditional testing, it cannot entirely replace it, because testing areas and scenarios exist where synergies and trade-offs between the two must be considered. For example, some testing areas requiring implicit domain knowledge about the customer's business (such as insurance), or areas where hardware and software are integral to each other and directly interdependent (such as programmable logic controllers), may require the adoption of traditional testing practices over cloud-based testing. This represents an area for further research: developing intelligent, context-aware cloud-based testing services with the ability to recreate or mimic areas or scenarios requiring implicit domain knowledge. Furthermore, there is a lack of adequate support tools for cloud-based testing services. These tools include self-learning test case libraries and tools for measuring cloud-based testing services. This paper will present our research efforts in the area of the cloud-based collaborative software development life cycle, with a particular focus on the feasibility of provisioning software testing as a cloud service. This research has direct industrial implications and holds significant research and business potential.
-
-
-
Grand Challenges For Sustainable Growth: Irish Presidency Of The Eu Council 2013 In Review
By Soha MaadThe presentation will overview the outcomes of the Irish Presidency of the EU Council for 2013 in addressing grand challenges for sustainable growth, with special emphasis on the digital (IT) agenda for Europe. The Irish Presidency of the EU Council for 2013 made major achievements, including the agreement of the seven-year EU budget (comprising the budget to tackle youth unemployment, the €70 billion Horizon 2020 programme for research and innovation, the €30 billion budget for the Connecting Europe Facility targeting enhancements in transport, energy and telecoms, and the €16 billion budget for the Erasmus programme), the reform of the Common Agriculture Policy (CAP) and the Common Fisheries Policy (CFP), and the brokerage of various partnerships and trade agreements (the most important being the EU-US agreement of €8 billion and the EU-Japan agreement). An estimated 200 policy commitments were achieved, including more than 80 in legislative form. The presentation will put a particular emphasis on the digital agenda for Europe and the horizon for international collaboration to tackle grand challenges for sustainable growth and the application of ICT to address these challenges. A brief overview of key related events held during the Irish Presidency of the EU Council will be given, and a book launch event elaborating on the topic and content of the presentation will be announced.
-
-
-
ChiQat: An intelligent tutoring system for learning computer science
More LessFoundational topics in Computer Science (CS), such as data structures and algorithmic strategies, pose particular challenges to learners and teachers. The difficulty of learning these basic concepts often discourages students from further study and leads to lower success rates. In any discipline, students' interaction with skilled tutors is one of the most effective strategies to address the problem of weak learning. However, human tutors are not always available. Technology can compensate here: Intelligent Tutoring Systems (ITSs) are systems designed to simulate the teaching of human tutors. ITSs use artificial intelligence techniques to guide learners through problem solving exercises using pedagogical strategies similar to those employed by human tutors. ChiQat-Tutor, a novel ITS that we are currently developing, aims at facilitating learning of basic CS data structures (e.g., linked lists, trees, stacks) and algorithmic strategies (e.g., recursion). The system will use a number of pedagogical strategies known to be effective in CS education, including positive and negative feedback, learning from worked-out examples, and learning from analogies. ChiQat will support linked lists, trees, stacks, and recursion. The ChiQat linked list module builds on iList, our previous ITS, which has been shown to help students learn linked lists. This module provides learners with a simulated environment where linked lists can be seen, constructed, and manipulated. Lists are represented graphically and can be manipulated interactively using programming commands (C++ or Java). The system can currently follow the solution strategy of a student, and provides personalized positive and negative feedback. Our next step is to add support for worked-out examples. The recursion module provides learners with an animated and interactive environment, where they can trace recursion calls of a given recursive problem. This is one of the most challenging tasks students face when learning the concept of recursion. In the near future, students will be aided in breaking down recursive problems into their basic blocks (base case and recursive case) through interactive dialogues. ChiQat employs a flexible, fault-tolerant, distributed plug-in architecture, where each plug-in fulfills a particular role. This configuration allows different system types to be defined, such as all-in-one applications or distributed ones. The system is composed of separate front and back ends. The back-end will house the main logic for heavy computational tasks such as problem knowledge representation and tutor feedback generation. The front-end (user interface) collects user input and sends it to the back-end, while displaying the current state of the problem. Due to this flexible architecture, it will be possible to run the system in two modes: online and offline. Offline mode will run all client and server components on the same machine, allowing the user to use the system in a closed environment. The online mode will allow running the back-end on a server as a web service which can communicate with a front-end running on the client machine. This allows greater reachability for users on lower-powered connected devices such as mobile devices, as well as traditional laptops and desktop computers connected to the Internet.
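To make the idea of "tracing recursion calls" concrete, here is a small Python sketch of a decorator that records the call and return events of a recursive function so they could later be replayed or animated for the learner. It is only an illustration of the concept, not ChiQat's actual implementation (which, as described above, is a distributed plug-in system); the factorial example and all names are invented.

```python
from functools import wraps

def trace_recursion(func):
    """Record each call and return of a recursive function as (event, depth, info)
    tuples, the kind of data an animated tutor could replay step by step."""
    events = []
    depth = 0

    @wraps(func)
    def wrapper(*args):
        nonlocal depth
        events.append(("call", depth, args))
        depth += 1
        result = func(*args)
        depth -= 1
        events.append(("return", depth, result))
        return result

    wrapper.events = events
    return wrapper

@trace_recursion
def factorial(n):
    # base case and recursive case: the two "basic blocks" learners must identify
    return 1 if n <= 1 else n * factorial(n - 1)

if __name__ == "__main__":
    factorial(4)
    for event, depth, info in factorial.events:
        print("  " * depth + f"{event}: {info}")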
-
-
-
Logic as a ground for effective and reliable web applications
More LessUnderstanding modern query languages provides key insights for the design of secure, effective and novel web applications. With the ever expanding volume of web data, two data shapes have clearly emerged as flexible ways of representing information: trees (such as most XML documents) and graphs (such as sets of RDF triples). Web applications that process, extract and filter such input data structures often rely on query languages such as XPath and SPARQL for that purpose. This has notably triggered research initiatives such as NoSQL aimed towards a better understanding and more effective implementations of these languages. In parallel, the increasing availability of surging volumes of data urges the finding of techniques to make these languages scale in order to query data of higher orders of magnitude in size. The development of big-data-ready efficient and scalable query evaluators is challenging in several interdependent aspects: one is parallelization -- or how to evaluate a query by leveraging a cluster of machines. Another critical aspect consists in finding techniques for placing data on the cloud in a clever manner so as to limit data communication and thus diminish the global workload. In particular, one difficulty resides in optimizing data partitioning for the execution of subqueries, possibly taking into account additional information on data organization schemes (such as XML Schemas or OWL descriptions). At the same time, growing concerns about data privacy urge the development of analyzers for web data access control policies. We believe that static analysis of web query languages will play an increasingly important role especially in all the aforementioned situations. In this context, we argue that modal logic can give useful yardsticks for characterizing these languages in terms of expressive power and also in terms of complexity for the problem of query answering and for the problems of static analysis of queries. Furthermore, model-checkers and satisfiability-checkers for modal logics such as the mu-calculus can serve as a robust ground for respectively designing scalable query evaluators and powerful static analyzers.
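As a minimal illustration of tree-shaped web data being filtered by a query language (the setting the static-analysis argument above is about), the following Python sketch evaluates an XPath expression over a small XML fragment using the lxml library; the document and query are invented for the example and are not tied to any specific system mentioned in the abstract.

```python
from lxml import etree

# A tiny XML tree standing in for web data; the contents are purely illustrative.
doc = etree.fromstring(
    "<catalog>"
    "  <book lang='en'><title>Logic</title><price>30</price></book>"
    "  <book lang='fr'><title>Graphes</title><price>25</price></book>"
    "</catalog>"
)

# XPath query: titles of English-language books cheaper than 35.
titles = doc.xpath("//book[@lang='en' and number(price) < 35]/title/text()")
print(titles)  # ['Logic']

# The same tree navigated node by node: the kind of structure a static analyzer
# (e.g., one based on the mu-calculus) would reason about symbolically.
for book in doc.xpath("//book"):
    print(book.get("lang"), book.findtext("title"))
```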
-
-
-
On green planning and management of cellular networks in urban cities
By Zaher DawyEnergy is becoming a main concern nowadays due to the increasing demands on natural energy resources. Base stations (BS) consume up to 80% of the total energy expenditure in a cellular network. The energy-efficiency of the BSs decreases significantly at off-peak hours since the power amplifiers' energy-efficiency degrades at lower output power. Thus, power savings methods should focus on the access network level by trying to manipulate the BSs power consumption. This could be done by reducing the number of active elements (e.g., BSs) in the network for lower traffic states by switching some BSs off. In this case, network management should allow smooth transition between different network topologies based on the traffic demands. In this work, we evaluate a green radio network planning approach by jointly optimizing the number of active BSs and the BS on/off switching patterns based on the changing traffic conditions in the network in an effort to reduce the total energy consumption of the BSs. Planning is performed based on two approaches: a reactive and a proactive approach. In the proactive approach, planning will be performed starting with the lowest traffic demand until reaching the highest traffic demand whereas in the reactive approach, the reverse way is considered. Performance results are presented for various case studies and evaluated taking into account practical network planning considerations. Moreover, we present real planning results in an urban city environment using the ICS telecom tool from ATDI in order to perform coverage calculations and analysis for LTE networks.
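The abstract does not give the joint optimization in detail, so the sketch below only illustrates the underlying intuition of one planning direction (sizing the network for peak traffic and then switching base stations off as demand drops): a greedy loop deactivates the smallest-capacity BS as long as the remaining stations can still carry the offered traffic with a margin. The capacities, traffic values and feasibility test are illustrative placeholders, not the paper's LTE planning computation or the ICS telecom coverage analysis.

```python
def greedy_switch_off(capacity, traffic, margin=1.1):
    """Return the set of base stations kept active for the given traffic demand.
    capacity: dict BS -> capacity; traffic: total offered traffic (same units)."""
    active = dict(capacity)
    # Keep removing the smallest-capacity BS while the rest can still serve
    # the offered traffic with a safety margin.
    while len(active) > 1:
        candidate = min(active, key=active.get)
        remaining = sum(v for k, v in active.items() if k != candidate)
        if remaining >= margin * traffic:
            del active[candidate]          # switch this BS off
        else:
            break                          # no further BS can be removed safely
    return set(active)

if __name__ == "__main__":
    caps = {"BS1": 40, "BS2": 35, "BS3": 25, "BS4": 20}   # illustrative capacities
    for load in (100, 60, 25):                            # peak -> off-peak traffic
        print(load, greedy_switch_off(caps, load))
```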
-
-
-
Cooperative relaying for idle band integration in spectrum sharing systems
By Syed HussainRecent developments in wireless communications and the emergence of high data rate services have consumed almost all the accessible spectrum, making it a very scarce radio resource. Spectrum from very low frequencies up to the several-GHz range has been either dedicated to a particular service or licensed to its providers. It is very difficult to find sufficient bandwidth for new technologies and services within the accessible spectrum range. In contrast, studies in different parts of the world reveal that the licensed and/or dedicated spectrum is underutilized, leaving unused bands at different frequencies. These idle bands, however, cannot be used by non-licensed users due to current spectrum management practices throughout the world. This fact forced the regulatory authorities and academia to rethink spectrum allocation policies. This resulted in the idea of spectrum sharing systems, generally known as cognitive radio, in which non-licensed or secondary users can access the spectrum licensed to the primary users. Many techniques and procedures have been suggested in recent years for smooth and transparent spectrum sharing among the primary and secondary users. The most common approach suggests that the secondary users should perform spectrum sensing to identify the unused bands and exploit them for their own transmission. However, as soon as the primary user becomes active in that band, secondary transmission should be switched off or moved to some other idle band. A major problem faced by the secondary users is that the average width of the idle bands available at different frequencies is not large enough to support high data rate wireless applications and services. A possible solution is to integrate a few idle bands together to generate a larger bandwidth. This technique is also known as spectrum aggregation. Generally, it is proposed to build the transmitter with multiple radio frequency chains which are activated according to the availability of idle bands. A combiner or aggregator is then used to transmit the signal through the antenna. Similarly, the receiver can be realized with multiple receive RF chains fed from the antenna through a separator or splitter. Another option is to use orthogonal frequency division multiplexing, in which sub-carriers can be switched on and off based on unused and active primary bands, respectively. These solutions are developed and analyzed for direct point-to-point links between the nodes. In this work, we analyze spectrum aggregation for indirect links through multiple relays. We propose a simple mechanism for idle band integration in a secondary cooperative network. A few relays in the system each facilitate part of the source's aggregation of available idle bands, and collectively all the involved relays provide an aggregated larger bandwidth for the source-to-destination link. We analyze two commonly used forwarding schemes at the relays, namely amplify-and-forward and decode-and-forward. We focus on the outage probability of the scheme and derive a generalized closed-form expression applicable to both scenarios. We analyze the system performance under different influential factors and reveal some important trade-offs.
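To make the outage notion concrete, here is a hedged Python sketch that estimates, by Monte Carlo simulation over Rayleigh-faded hops, the outage probability of a single two-hop relay link under the standard decode-and-forward model (end-to-end SNR equal to the minimum of the hop SNRs) and a common amplify-and-forward approximation. It is a generic textbook setup with invented parameters, not the paper's multi-relay band-aggregation analysis or its closed-form expression.

```python
import numpy as np

def outage_two_hop(avg_snr1, avg_snr2, rate_bps_hz, scheme="DF", trials=200_000, seed=0):
    """Monte Carlo outage probability of a source-relay-destination link.
    Outage occurs when the end-to-end capacity (with the 1/2 half-duplex factor)
    falls below the target rate."""
    rng = np.random.default_rng(seed)
    snr1 = rng.exponential(avg_snr1, trials)   # Rayleigh fading -> exponential SNR
    snr2 = rng.exponential(avg_snr2, trials)
    if scheme == "DF":
        snr_e2e = np.minimum(snr1, snr2)
    else:  # amplify-and-forward, common tight approximation
        snr_e2e = snr1 * snr2 / (snr1 + snr2 + 1.0)
    capacity = 0.5 * np.log2(1.0 + snr_e2e)    # half-duplex relaying penalty
    return np.mean(capacity < rate_bps_hz)

if __name__ == "__main__":
    for scheme in ("DF", "AF"):
        p_out = outage_two_hop(avg_snr1=10.0, avg_snr2=10.0, rate_bps_hz=1.0, scheme=scheme)
        print(scheme, round(p_out, 4))
```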
-
-
-
PhosphoSiteAnalyzer: Analyzing complex cell signalling networks
More LessPhosphoproteomic experiments are routinely conducted in laboratories worldwide, and because of the fast development of mass spectrometric techniques and efficient phosphopeptide enrichment methods, life-science researchers frequently end up having lists with tens of thousands of phosphorylation sites for further interrogation. To answer biologically relevant questions from these complex data sets, it becomes essential to apply computational, statistical, and predictive analytical methods. Recently we have provided an advanced bioinformatic platform termed “PhosphoSiteAnalyzer” to the scientific community to explore large phosphoproteomic data sets that have been subjected to kinase prediction using the previously published NetworKIN algorithm. NetworKIN applies sophisticated linear motif analysis and contextual network modeling to obtain kinase-substrate associations with high accuracy and sensitivity. PhosphoSiteAnalyzer provides an algorithm for retrieval of kinase predictions from the public NetworKIN webpage in a semi-automated way and applies hereafter advanced statistics to facilitate a user-tailored in-depth analysis of the phosphoproteomic data sets. The interface of the software provides a high degree of analytical flexibility and is designed to be intuitive for most users. Network biology and in particular kinase-substrate network biology provides an adequate conceptual framework to describe and understand diseases and for designing targeted biomedicine for personalized medicine. Hence network biology and network analysis are absolutely essential to translational medical research. PhosphoSiteAnalyzer is a versatile bioinformatics tool to decipher such complex networks and can be used in the fight against serious diseases such as psychological disorders, cancer and diabetes that arise as a result of dysfunctional cell signalling networks.
-
-
-
Opportunistic Cooperative Communication Using Buffer-Aided Relays
More LessSpectral efficiency of a communication system refers to the information rate that the system can transmit reliably over the available bandwidth (spectrum) of the communication channel. Enhancing the spectral efficiency is without doubt a major objective for the designers of next generation wireless systems. It is evident that the telecommunications industry is rapidly growing due to the high demands for ubiquitous connectivity and the popularity of high data rate multimedia services. As well-known, wireless channels are characterized by temporal and spectral fluctuations due to physical phenomena such as fading and shadowing. A well-established approach to exploit the variations of the fading channel is opportunistic communication, which means transmitting at high rates when the channel is good and at low rates or not at all when the channel is poor. Furthermore, in the last few years, the research focus has turned into exploiting the broadcast nature of the wireless medium and the potential gains of exploiting the interaction (cooperation) between neighboring nodes in order to enhance the overall capacity of the network. Cooperative communication will be one of the major milestones in the next decade for the emerging fourth and fifth generation wireless systems. Cooperative communication can take several forms such as relaying the information transmitted by other nodes, coordinated multi-point transmission and reception techniques, combining several information flows together using network coding in order to exploit side information available at the receiving nodes, and interference management in dense small cell networks and cognitive radio systems to magnify the useful information transmission rates. We propose to exploit all sources of capacity gains jointly. We want to benefit from old, yet powerful, and new transmission techniques. Specifically, we want to examine optimal resource allocation and multiuser scheduling in the context of the emerging network architectures that involve relaying, network coding and interference handling techniques. We like to call this theme opportunistic cooperative communication. With the aid of opportunistic cooperative communication we can jointly exploit many sources of capacity gains such as multiuser diversity, multihop diversity, the broadcast nature of the wireless medium and the side-information at the nodes. We suggest exploring opportunistic cooperative communication as the choice for future digital communications and networking. In this direction, we introduce the topic of buffer-aided relaying as an important enabling technology for opportunistic cooperative communication. The use of buffering at the relay nodes enables storing the received messages temporarily before forwarding them to the destined receivers. Therefore, buffer-aided relaying is a prerequisite in order to apply dynamic opportunistic scheduling in order to exploit the channel diversity and obtain considerable throughput gains. Furthermore, these capacity gains can be integrated with other valuable sources of capacity gains that can be obtained using, e.g., multiuser scheduling, network coding over bidirectional relays, and interference management and primary-secondary cooperation in overlay cognitive radios systems. The gains in the achievable spectral efficiency are valuable and hence they should be considered for practical implementation in next generation broadband wireless systems. 
Furthermore, this topic can be further exploited in other scenarios and applications.
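As a small illustration of why buffering at the relay enables opportunistic scheduling, the Python sketch below compares a rigid receive/transmit alternation with a max-link-style selection over a two-hop link: in each slot the scheduler serves whichever hop currently has the stronger channel, subject to buffer occupancy. The channel model, decodability threshold, buffer size and rates are all illustrative assumptions, not a specific protocol from the abstract.

```python
import random

def simulate(slots=10_000, buffer_size=50, seed=0, buffer_aided=True):
    """Compare fixed alternating relaying with buffer-aided max-link selection.
    Returns average end-to-end throughput in packets per slot."""
    rng = random.Random(seed)
    buffered, delivered = 0, 0
    for t in range(slots):
        g_sr = rng.expovariate(1.0)            # source->relay channel gain
        g_rd = rng.expovariate(1.0)            # relay->destination channel gain
        if buffer_aided:
            # Serve whichever hop is currently stronger, if the buffer state allows it.
            use_sr = (g_sr >= g_rd and buffered < buffer_size) or buffered == 0
        else:
            use_sr = (t % 2 == 0)              # rigid receive/transmit alternation
        if use_sr:
            if g_sr > 0.5 and buffered < buffer_size:   # crude "decodable" threshold
                buffered += 1
        else:
            if g_rd > 0.5 and buffered > 0:
                buffered -= 1
                delivered += 1
    return delivered / slots

if __name__ == "__main__":
    print("buffer-aided  :", round(simulate(buffer_aided=True), 3))
    print("fixed schedule:", round(simulate(buffer_aided=False), 3))
```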
-
-
-
Security-Smart-Seamless (Sss) Public Transportation Framework For Qatar Using Tv White Space (Tvws)
More LessThe Qatari government has a long-term vision of introducing intelligent transport, logistics management and road safety services in Qatar. Studies have shown that the public transport system in Qatar, and Doha in particular, is developing, but is not yet as comprehensive as in many renowned world cities (Pernin et al., 2008; Henry et al., 2012). Furthermore, with the hosting rights of the FIFA 2022 World Cup being granted to Qatar, a seminar paper aligned with the 2030 Qatar National Vision was recently discussed, aiming at a world-class transport system for Qatar that meets the requirements of the country's long-term goals (Walker, 2013). The introduction of an intelligent public transport system involves the incorporation of technology into the transportation system so as to improve public safety, conserve resources through a seamless transport hub, and introduce smartness for maximum utility in public transportation. The aforementioned goals of the 2030 Qatar National Vision can be achieved through TVWS technology. TVWS technology was created to make use of sparsely used VHF and UHF spectrum bands, as indicated in Figure 1. The project focuses on IEEE 802.22 to enhance the Security, Smart, Seamless (SSS) transportation system in Qatar, as shown in Figure 2 below. It is sub-divided as follows: (i) Security: TVWS will provide surveillance cameras in the public bus and train system. The buses/trains will be fitted with micro-cameras. The project will give the city center management team the ability to monitor and track the public transportation system in case of accidents, terrorism and other social issues. (ii) Seamless: TVWS will be made available to anyone who purchases a down/up-converter terminal to access internet services for free. The need for an up/down converter arises because current mobile devices operate in ISM bands, whereas TVWS operates in VHF/UHF bands. (iii) Smart: The city center management can sit in their office and take control of any eventuality. Comparable projects in the past would likely have used satellite technology, whose limitations, such as round-trip delay, are well known. (iv) Novelty: Spectrum sensing using a Grey prediction algorithm is proposed to achieve optimal results. From an academic point of view, the Grey prediction algorithm has been used in predicting handoff in cellular communication and in stock market prediction, with high prediction accuracy of 95-98% (Sheu and Wu, 2000; Kayacan et al., 2010). The proposed methodology is shown in Figure 3. The Wireless Regional Area Network (WRAN) cell radius varies from 10 to 100 km, leaning towards a macro-cell architecture. Hence, the roll-out will require less base station infrastructure. In addition, the VHF/UHF bands offer desirable propagation qualities compared to other frequency bands, thereby ensuring wider and more reliable radio coverage.
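Since the proposal names the Grey prediction algorithm, the following Python sketch shows the classic GM(1,1) grey model fitted to a short synthetic sequence (which could stand in for recent channel-occupancy measurements). The data values are invented, and the sketch makes no claim about the project's actual spectrum-sensing pipeline.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Classic GM(1,1) grey model: fit on the sequence x0 and forecast `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                                  # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack((-z1, np.ones(n - 1)))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]         # least-squares fit of a and b

    def x1_hat(k):                                      # k = 0 corresponds to x0[0]
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    preds = [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]
    return np.array(preds)

if __name__ == "__main__":
    occupancy = [0.62, 0.58, 0.55, 0.53, 0.50]          # invented channel-occupancy history
    print(gm11_forecast(occupancy, steps=2))
```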
-
-
-
Visualization Methods And Computational Steering Of The Electron Avalanches In The High-Energy Particle Detector Simulators
More LessThe traditional cycle in the simulation of electron avalanches, as in any scientific simulation, is to prepare input, execute the simulation, and visualize the results as a post-processing step. Usually, such simulations are long running and computationally intensive. It is not unusual for a simulation to keep running for several days or even weeks. If the experiment leads to the conclusion that there is incorrect logic in the application, or that the input parameters were wrong, then the simulation has to be restarted with correct parameters. The most common method of analyzing the simulation results is to gather the data on disk and visualize it after the simulation finishes. Electron avalanche simulations can generate millions of particles, which can require a huge amount of disk I/O. The disk, being inherently slow, can become the bottleneck and degrade the overall performance. Furthermore, these simulations are commonly run on a supercomputer, which maintains a queue of researchers' programs and executes them as time and priorities permit. If the simulation produces incorrect results and there is a need to restart it with different input parameters, it may not be possible to restart it immediately, because the supercomputer is typically shared by several other researchers. The simulations (or jobs) have to wait in the queue until they are given a chance to execute again. This increases the scientific simulation cycle time and hence reduces the researcher's productivity. This research work proposes a framework to let researchers visualize the progress of their experiments so they can detect potential errors at early stages. It will not only enhance their productivity but also increase the efficiency of the computational resources. This work focuses on simulations of the propagation and interactions of electrons with ions in particle detectors known as Gas Electron Multipliers (GEMs). However, the proposed method is applicable to any scientific simulation, from small to very large scale.
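A minimal sketch of the general pattern argued for above: a long-running simulation periodically hands lightweight snapshots to a monitoring callback instead of waiting for the final post-processing step, so the researcher can spot wrong parameters early and abort. It is a generic toy illustration, not the GEM simulator or the proposed framework itself; the growth model and thresholds are invented.

```python
import random

def run_simulation(n_steps, snapshot_every, on_snapshot):
    """Toy 'avalanche' simulation that reports intermediate state while running."""
    electrons = 1
    for step in range(1, n_steps + 1):
        # Illustrative multiplication step, standing in for the real physics.
        electrons += sum(1 for _ in range(electrons) if random.random() < 0.1)
        if step % snapshot_every == 0:
            keep_going = on_snapshot(step, electrons)
            if not keep_going:
                print(f"steered: aborted at step {step}")
                return electrons
    return electrons

def monitor(step, electrons):
    """In-situ check; in a real framework this would feed a live visualization."""
    print(f"step {step}: {electrons} electrons")
    return electrons < 10_000            # abort if the avalanche diverges unexpectedly

if __name__ == "__main__":
    random.seed(42)
    run_simulation(n_steps=200, snapshot_every=20, on_snapshot=monitor)
```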
-
-
-
Kernel collaborative label power set system for multi-label classification
More LessA traditional multi-class classification system assigns each example x a single label l from a set of disjoint labels L. However, in many modern applications such as text classification, image/video categorization, music categorization, etc. [1, 2], each instance can be assigned to a subset of labels Y ⊆ L. In text classification, a news document can cover several topics such as the name of a movie, box office ratings, and/or critic reviews. In image/video categorization, multiple objects can appear in the same image/video. This problem is known as multi-label learning. Figure 1 shows some examples of multi-label images. Collaborative representation with regularized least squares (CRC-RLS) is a state-of-the-art face recognition method that exploits the collaborative representation between classes in representing the query sample [3]. The basic idea is to code the testing sample over a dictionary, and then classify it based on the coding vector. While the benefits of collaborative representation are becoming well established for face recognition, or multi-class classification in general, the corresponding use for multi-label classification needs to be investigated. In this research, a kernel collaborative label power set multi-label classifier (ML-KCR) based on the regularized least squares principle is proposed. ML-KCR directly introduces the discriminative information of the samples using l2-norm "sparsity" and uses the class-specific representation residual for classification. Further, in order to capture correlation among classes, the multi-label problem is transformed using the label power set, which is based on the concept of handling sets of labels as single labels, thus allowing the classification process to inherently take into account the correlations between labels. The proposed approach is applied to six publicly available multi-label data sets from different domains using five different multi-label classification measures. We validate the advocated approach experimentally and demonstrate that it yields significant performance gains when compared with state-of-the-art multi-label methods. In summary, our main contributions are the following: (1) A kernel collaborative label powerset classifier (ML-KCR) based on the regularized least squares principle is proposed for multi-label classification. ML-KCR directly introduces the discriminative information and aims to maximize the margins between the samples of different classes in each local area. (2) In order to capture correlation among labels, the multi-label problem is transformed using the label powerset (LP). The main disadvantage associated with LP is the complexity issue arising from the many distinct label sets. We show that this complexity issue can be avoided using collaborative representation with regularization. (3) We applied the proposed approach to publicly available multi-label data sets and compared it with state-of-the-art multi-label classifiers: RAkEL, ECC, CLR, MLkNN, IBLR [2]. References [1] Tsoumakas, G., Katakis, I., Vlahavas, I., 2009. Data Mining and Knowledge Discovery Handbook. Springer, 2nd Edition, Ch. Mining Multilabel Data. [2] Zhang, M.-L., Zhou, Z.-H., 2013. A review on multi-label learning algorithms. IEEE Transactions on Knowledge and Data Engineering (preprint). [3] Zhang, L., Yang, M., 2011. Sparse representation or collaborative representation: Which helps face recognition? In: IEEE International Conference on Computer Vision (ICCV).
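To make the two ingredients concrete, here is a hedged Python sketch that (a) applies the label powerset transformation, mapping each observed label subset to a single class, and (b) classifies a query by collaborative representation with regularized least squares, assigning it to the class whose training samples best reconstruct it (smallest class-wise residual). It is a plain linear version for illustration, with toy data, and is not the kernelized ML-KCR formulation or its evaluation protocol.

```python
import numpy as np

def label_powerset(Y):
    """Map each row of the binary label matrix Y to a single 'powerset' class id."""
    keys = [tuple(row) for row in Y]
    classes = {k: i for i, k in enumerate(sorted(set(keys)))}
    return np.array([classes[k] for k in keys]), classes

def crc_rls_predict(X_train, y_train, x_query, lam=0.01):
    """Collaborative representation: code the query over all training samples with an
    l2-regularized least-squares fit, then pick the class with the smallest residual."""
    D = X_train.T                                        # columns are training samples
    rho = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x_query)
    best_class, best_res = None, np.inf
    for c in np.unique(y_train):
        idx = np.where(y_train == c)[0]
        residual = np.linalg.norm(x_query - D[:, idx] @ rho[idx])
        if residual < best_res:
            best_class, best_res = c, residual
    return best_class

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 10))                        # toy features
    Y = (X[:, :3] > 0).astype(int)                       # toy 3-label assignments
    y_lp, mapping = label_powerset(Y)
    pred = crc_rls_predict(X, y_lp, X[0])
    inverse = {v: k for k, v in mapping.items()}
    print("predicted label set:", inverse[pred], "true:", tuple(Y[0]))
```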
-
-
-
Simultaneous estimation of multiple phase information in a digital holographic configuration
More LessThe automated and simultaneous extraction of multiple phase distributions and their derivatives continue to pose major challenges. A possible reason is the lack of proper data processing concepts to support the multiple wave mixing that needs to be introduced to make the configuration at a time sensitive to multiple phase components and yet be able to decrypt each component of the phase efficiently and robustly, in absence of any cross-talk. The paper demonstrates a phase estimation method for encoding and decoding the phase information in a digital holographic configuration. The proposed method relies on local polynomial phase approximation and subsequent state-space formulation. The polynomial approximation of phase transforms multidimensional phase extraction into a parameter estimation problem, and the state-space modeling allows the application of Kalman filtering to estimate these parameters. The prominent advantages of the method include high computational efficiency, ability to handle rapid spatial variations in the fringe amplitude, and non-requirement of two-dimensional unwrapping algorithms. The performance of the proposed method is evaluated using numerical simulation.
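Since the method rests on a state-space model whose state collects the local polynomial phase parameters, the sketch below shows the generic linear Kalman filter predict/update recursion in Python. The state dimension, matrices and simulated measurements are placeholders; the paper's actual formulation (built from the fringe model and the polynomial phase approximation) would supply the specific F, H, Q and R.

```python
import numpy as np

def kalman_filter(zs, F, H, Q, R, x0, P0):
    """Standard linear Kalman filter: returns the filtered state estimates."""
    x, P = x0.copy(), P0.copy()
    estimates = []
    for z in zs:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with measurement z
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy 2-element state (a local phase value and its slope) with random-walk dynamics.
    F = np.array([[1.0, 1.0], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])                  # only the phase-like quantity is observed
    Q = 1e-4 * np.eye(2)
    R = np.array([[0.05]])
    truth = np.cumsum(np.full(100, 0.1))        # linearly growing "phase"
    zs = [np.array([t + rng.normal(scale=0.2)]) for t in truth]
    est = kalman_filter(zs, F, H, Q, R, x0=np.zeros(2), P0=np.eye(2))
    print(est[-1])                              # final [phase, slope] estimate
```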
-
-
-
Automatic long audio alignment for conversational Arabic speech
More LessLong audio alignment is a known problem in speech processing in which the goal is to align a long audio input with the corresponding text. Accurate alignments help in many speech processing tasks such as audio indexing, acoustic model training for speech recognizers, audio summarizing and retrieving, etc. In this work, we have collected more than 1400 hours of conversational Arabic speech extracted from Al-Jazeerah podcasts, together with the corresponding non-aligned text transcriptions. Podcast lengths vary from 20 to 50 minutes each. Five episodes have been manually aligned to be used in evaluating alignment accuracy. For each episode, a split-and-merge segmentation approach is applied to segment the audio file into small segments with an average length of 5 seconds, with filled pauses on the boundary of each segment. A pre-processing stage is applied to the corresponding raw transcriptions to remove titles, headings, images, speakers' names, etc. A biased language model (LM) is trained on the fly using the processed text. Conversational Arabic speech is mostly spontaneous and influenced by dialectal Arabic. Since phonemic pronunciation modeling is not always possible for non-standard Arabic words, a graphemic pronunciation model (PM) is utilized to generate one pronunciation variant for each word. Unsupervised acoustic model adaptation is applied to a pre-trained Arabic acoustic model using the current podcast audio. The adapted acoustic model (AM), along with the biased LM and the graphemic PM, is used in a fast speech recognition pass applied to the current podcast's segments. The recognizer's output is aligned with the processed transcriptions using the Levenshtein distance algorithm. This ensures error recovery, where misalignment of a certain segment does not affect the alignment of later segments. The proposed approach resulted in an alignment accuracy of 97% on the evaluation set. Most misalignment errors were found in segments having significant background noise (music, channel noise, cross-talk, etc.) or significant speech disfluencies (truncated words, repeated words, hesitations, etc.). For some speech processing tasks, like acoustic model training, it is required to eliminate misaligned segments from the training data. That is why a confidence scoring metric is proposed to accept or reject the aligner's output. The score is provided for each segment and is essentially the minimum edit distance between the recognizer's output and the aligned text. By using confidence scores, it was possible to reject the majority of misaligned segments, resulting in 99% alignment accuracy. This work was funded by a grant from the Qatar National Research Fund under its National Priorities Research Program (NPRP) award number NPRP 09-410-1-069. Reported experimental work was performed at Qatar University in collaboration with the University of Illinois.
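As a minimal illustration of the anchoring step described above, the Python sketch below computes a word-level Levenshtein alignment with backtrace between a recognizer hypothesis and the reference transcription, returning matched word pairs; the same dynamic-programming table also yields the per-segment minimum edit distance used as a confidence score. The example strings are invented, and this is not the project's actual aligner.

```python
def levenshtein_align(hyp, ref):
    """Word-level edit-distance alignment with backtrace.
    Returns (distance, list of (hyp_word_or_None, ref_word_or_None) pairs)."""
    n, m = len(hyp), len(ref)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i
    for j in range(1, m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion from hypothesis
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # match / substitution
    # Backtrace to recover the aligned word pairs.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (0 if hyp[i - 1] == ref[j - 1] else 1):
            pairs.append((hyp[i - 1], ref[j - 1])); i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            pairs.append((hyp[i - 1], None)); i -= 1
        else:
            pairs.append((None, ref[j - 1])); j -= 1
    return d[n][m], list(reversed(pairs))

if __name__ == "__main__":
    hyp = "the weather in doha is hot".split()
    ref = "the weather in doha is very hot".split()
    dist, pairs = levenshtein_align(hyp, ref)
    print(dist, pairs)
```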
-
-
-
Simultaneous fault detection, isolation and tracking design using a single observer-based module
By Nader MeskinFault diagnosis (FD) has received much attention for complex modern automatic systems such as cars, aircraft, rockets, unmanned vehicles, and so on since the 1970s. In the FD research field, diagnostic systems are often designed separately from the control algorithms, although it is highly desirable that both the control and diagnostic modules be integrated into one system module. Hence, the problem of simultaneous fault detection and control (SFDC) has attracted a lot of attention in the last two decades, both in research and application domains. The simultaneous design unifies the control and detection units into a single unit, which results in less complexity compared with separate design, and so it is a reasonable approach. However, the current literature in the field of SFDC suffers from the following limitations and drawbacks. First, most of the literature that considers the problem of SFDC can achieve the control objective of "regulation", but none considers the problem of "tracking" in SFDC design. Therefore, considering the problem of tracking in the SFDC design methodology is of great significance. Second, although most of the current references in the field of SFDC can achieve acceptable fault detection, they cannot achieve fault isolation. Hence, although there are certain published works in the field of SFDC, none of them is capable of detecting and isolating simultaneous faults in the system as well as tracking the specified reference input. In this paper, the problem of simultaneous fault detection, isolation and tracking (SFDIT) design for linear continuous-time systems is considered. An H_infty/H_index formulation of the SFDIT problem using a dynamic observer detector and state feedback controller is developed. Indeed, a single module based on a dynamic observer is designed which produces two signals, namely the residual and the control signals. The SFDIT module is designed such that the effects of disturbances and reference inputs on the residual signals are minimized (for accomplishing fault detection) subject to the constraint that the transfer matrix function from the faults to the residuals is equal to a pre-assigned diagonal transfer matrix (for accomplishing fault isolation), while the effects of disturbances, reference inputs and faults on the specified control output are minimized (for accomplishing fault-tolerant control and tracking). Sufficient conditions for solvability of the problem are obtained in terms of linear matrix inequality (LMI) feasibility conditions. Moreover, it is shown that by applying our methodology, the computational complexity, from the viewpoint of the number and size of required observers, is significantly reduced in comparison with existing methodologies. Using this approach, the system can not only detect and isolate the faults that occur but is also able to track the specified reference input. The proposed method can also handle isolation of simultaneous faults in the system. Simulation results for an autonomous unmanned underwater vehicle (AUV) illustrate the effectiveness of our proposed design methodology.
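Schematically, and only as a restatement of the design goals listed above in generic notation (the symbols, gains and bounds below are not the paper's own), the mixed synthesis problem can be written as an LMI-constrained search over the observer and controller gains:

```latex
\[
\begin{aligned}
\text{find } K_{\mathrm{obs}},\,K_{\mathrm{ctrl}} \text{ such that}\quad
& \lVert T_{(d,\,r_{\mathrm{ref}})\to r}\rVert_{\infty} < \gamma_{1}
  && \text{(disturbances/references barely affect the residual: detection)}\\
& T_{f\to r}(s) = \operatorname{diag}\bigl(t_{1}(s),\dots,t_{n_f}(s)\bigr)
  && \text{(pre-assigned diagonal fault-to-residual map: isolation)}\\
& \lVert T_{(d,\,r_{\mathrm{ref}},\,f)\to z}\rVert_{\infty} < \gamma_{2}
  && \text{(small effect on the tracking/control output: fault tolerance and tracking)}
\end{aligned}
\]
```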
-
-
-
Towards computational offloading in mobile device clouds
More LessWith the rise in mobile device adoption, and with growth in the mobile application market expected to reach $30 billion by the end of 2013, mobile user expectations for pervasive computation and data access are unbounded. Yet, various applications, such as face recognition, speech and object recognition, and natural language processing, exceed the limits of standalone mobile devices. Such applications resort to exploiting larger resources in the cloud, which has sparked research into problems arising from data and computational offloading to the cloud. Research in this area has mainly focused on profiling and offloading tasks to remote cloud resources, automatically transforming mobile applications by provisioning and partitioning their execution into offloadable tasks, and, more recently, bringing computational resources (e.g. Cloudlets) closer to task initiators in order to save mobile device energy. In this work, we argue for environments in which computational offloading is performed among mobile devices forming what we call a Mobile Device Cloud (MDC). Our contributions are: (1) Implementing an emulation testbed for quantifying the potential gain, in execution time or energy consumed, of offloading tasks to an MDC. This testbed includes a client offloading application, an offloadee server receiving tasks, and a traffic shaper situated between the client and server emulating different communication technologies (Bluetooth 3.0, Bluetooth 4.0, WiFi Direct, WiFi, and 3G). Our evaluation for offloading tasks with different data and computation characteristics to an MDC registers up to 80% and 90% savings in time and energy, respectively, as opposed to offloading to the cloud. (2) Providing an MDC experimental platform to enable future evaluation and assessment of MDC-based solutions. We create a testbed, shown in Figure 1, to measure the energy consumed by a mobile device when running or offloading tasks using different communication technologies. We build an offloading Android-based mobile application and measure the time taken to offload tasks, execute them, and receive the results from other devices within an MDC. Our experimental results show gains in time and energy savings, up to 50% and 26% respectively, by offloading within MDCs, as opposed to locally executing tasks. (3) Providing solutions that address two major MDC challenges. First, due to mobility, offloadee devices leaving an MDC would seriously compromise performance. Therefore, we propose several social-based offloadee selection algorithms that exploit contact history between devices, as well as friendship relationships or common interests between device owners or users. Second, we provide solutions for balancing power consumption by distributing computational load across MDC members to prolong an MDC's lifetime. This need arises when users need to maximize the lifetime of an ensemble of devices that belong to the same user or household. We evaluate the proposed algorithms for addressing these two challenges using real datasets that contain contact mobility traces and social information for conference attendees over the span of three days. Our results show the impact of choosing a suitable offloadee subset, the gain from leveraging social information, and how MDCs can live longer by balancing power consumption across their members.
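The trade-off quantified by the testbed (local execution versus offloading over a given link) is often captured by a simple energy/time model; the Python sketch below implements such a textbook-style decision rule with invented device and link parameters, purely to illustrate the kind of comparison the emulation measures, and not the paper's measured numbers or selection algorithms.

```python
def should_offload(cycles, data_bits,
                   f_local_hz, p_compute_w,
                   bandwidth_bps, p_tx_w, f_remote_hz):
    """Compare local execution with offloading to a nearby device.
    Returns (decision, local (time, energy), offload (time, energy))."""
    # Local execution on the device's own CPU.
    t_local = cycles / f_local_hz
    e_local = p_compute_w * t_local

    # Offload: ship the input data, then wait for the (faster) remote execution.
    t_tx = data_bits / bandwidth_bps
    t_offload = t_tx + cycles / f_remote_hz
    e_offload = p_tx_w * t_tx                 # the remote computation energy is not ours

    better = t_offload < t_local and e_offload < e_local
    return ("offload" if better else "local"), (t_local, e_local), (t_offload, e_offload)

if __name__ == "__main__":
    # Illustrative numbers: 2 Gcycles of work, 1 MB of input, a WiFi-Direct-like link.
    decision, local, remote = should_offload(
        cycles=2e9, data_bits=8e6,
        f_local_hz=1.0e9, p_compute_w=0.9,
        bandwidth_bps=40e6, p_tx_w=0.7, f_remote_hz=2.0e9)
    print(decision, "local (s, J):", local, "offload (s, J):", remote)
```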
-
-
-
QALB: Qatar Arabic language bank
More LessAutomatic text correction has been attracting research attention for English and some other western languages. Applications for automatic text correction vary from improving language learning for humans and reducing noise in text input to natural language processing tools to correcting machine translation output for grammatical and lexical choice errors. Despite the recent focus on some Arabic language technologies, Arabic automatic correction is still a fairly understudied research problem. Modern Standard Arabic (MSA) is a morphologically and syntactically complex language, which poses multiple writing challenges not only to language learners, but also to Arabic speakers, whose dialects differ substantially from MSA. We are currently creating resources to address these challenges. Our project has two components: first is QALB (Qatar Arabic Language Bank), a large parallel corpus of Arabic sentences and their corrections, and second is ACLE (Automatic Correction of Language Errors), an Arabic text correction system trained and tested on the QALB corpus. The QALB corpus is unique in that: a) it will be the largest Arabic text correction corpus available, spanning two million words; b) it will cover errors produced by native-speakers, non-native speakers, and machine translation systems; and c) it will contain a trace of all the actions performed by the human annotators to achieve the final correction. This presentation describes the creation of two major components of the project: the web-based annotation interface and the annotation guidelines. QAWI (QALB Annotation Web Interface) is our web-based, language-independent annotation framework used for manual correction of the QALB corpus. Our framework provides intuitive interfaces for annotating text, managing a large number of human annotators and performing quality control. Our annotation interface, in particular, provides a novel token-based editing model for correcting Arabic text that allows us to reliably track all modifications. We demonstrate details of both the annotation and the administration interfaces as well as the back-end engine. Furthermore, we show how this framework is able to speed up the annotation process by employing automated annotators to correct basic Arabic spelling errors. We also discuss the evolution of our annotation guidelines from its early developments through its actual usage for group annotation. The guidelines cover a variety of linguistic phenomena, from spelling errors to dialectal variations and grammatical considerations. The guidelines also include a large number of examples to help annotators understand the general principles behind the correction rules and not simply memorize them. The guidelines were written in parallel to the development of our web-based annotation interface and involved several iterations and revisions. We periodically provided new training sessions to the annotators and measured their inter-annotator agreement. Furthermore, the guidelines were updated and extended using feedback from the annotators and the inter-annotator agreement evaluations. This project is supported by the National Priority Research Program (NPRP grant 4-1058-1-168) of the Qatar National Research Fund (a member of the Qatar Foundation). The statements made herein are solely the responsibility of the authors.
-
-
-
Pfmt-Dnlcpa Theory For Ballistic Spin Wave Transport Across Iron-Gadolinium Nanojunctions Presenting Structural Interfacial Disorder Between Iron Leads
More LessIt is widely accepted at present that electronics-based information-processing technology has fundamental limitations. A promising alternative to electronic excitations is spin waves in magnetically ordered systems, which offer a potentially powerful route towards fabricating devices that transmit and process information (Khitun and Wang 2006). This approach to information-processing technology, known as magnonics, is rapidly growing (Kruglyak and Hicken 2006, Choi et al. 2007), and key magnonic components such as wave guides, emitters, nanojunctions and filters (Khater et al. 2011) are currently being explored as basic elements of magnonic circuitry. This paper deals with the theory for ballistic spin wave transport across ultrathin iron-gadolinium nanojunctions, ..-Fe] [Gd]nML [Fe-.., which are known to present structural interfacial disorder; n is the number of gadolinium monoatomic planes between the iron leads. It is shown that our PFMT-DNLCPA theory gives a detailed and complete analysis of the properties of the ballistic transmission, and of the corresponding reflection and absorption spectra, across the structurally disordered nanojunction. We have developed the dynamic non-local coherent phase approximation (DNLCPA) and the phase field matching theory (PFMT) methods (Ghader and Khater 2013), and fully integrate them to study the ballistic spin wave transport across such nanojunctions. The DNLCPA method yields a full description of the dynamics of the spin wave excitations localized on the nanojunction, and of their corresponding lifetimes and local densities of states. These are excitations propagating laterally in the nanojunction atomic planes with finite lifetimes, but their fields are localized along the direction normal to the nanojunction. Moreover, the calculations determine the reflection, transmission, and absorption spectra for spin waves incident at any arbitrary angle from the iron leads onto the nanojunction. The PFMT-DNLCPA calculated results vary with nanojunction thickness. In particular, the normal incidence transmission spectra present no absorption effects, and resonance-assisted maxima are identified, notably at low frequencies at microscopic and submicroscopic wavelengths, which shift to lower frequencies with increasing nanojunction thickness. These results render such systems interesting for potential applications in magnonic circuitry. Fig.1 Calculated DNLCPA-PFMT reflection and transmission spectra for spin waves at normal incidence from the iron leads onto the magnetic ..-Fe] [Gd]3ML [Fe-.. nanojunction, as a function of the spin wave energies in units of the iron exchange and spin J(Fe-Fe)S(Fe). Note the transmission-assisted maxima. Fig.2 Calculated absorption spectra for obliquely incident spin waves at the nanojunction cited in Fig.1, due to its structural interfacial disorder. Acknowledgements: The authors acknowledge QNRF financial support for the NPRP 4-184-1-035 project. References - S. Choi, K.S. Lee, K.Y. Guslienko, S.K. Kim, Phys. Rev. Lett. 98, 087205 (2007) - D. Ghader and A. Khater, to be published (2013) - A. Khater, B. Bourahla, M. Abou Ghantous, R. Tigrine, R. Chadli, Eur. Phys. J. B: Cond. Matter 82, 53 (2011) - A. Khitun and K. L. Wang, Proceedings of the Third International Conference on Information Technology, New Generations ITNG, 747 (2006) - V.V. Kruglyak and R.J. Hicken, J. Magn. Magn. Mater. 306, 191 (2006)
-
-
-
The Gas Electron Multiplier For Charged Particle Detection
By Maya Abi AklThe Gas Electron Multiplier (GEM) has emerged as a promising tool for charged particle detection. It is being developed as a candidate detection system for muon particles for the future upgrade of Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC). It consists of a thin polymer foil, metal coated on each side and pierced by a high density of holes (see figure). The potential difference between the electrodes and the high electric field generated by the holes will further amplify the electrons released in the gas of the detector by the ionizing radiation or the charged particle crossing the detector. In this work, we will report on the results of the performance of the GEM prototype at the tests conducted at the CERN acceleration facilities using pion and muon beams. The main issues under study are efficiency, gain uniformity, and spatial resolution of the detector.
-
-
-
Dynamic Non-Local Coherent Phase Approximation (Dnlcpa) Model For Spin Wave Dynamics In Ultrathin Magnetic Fe-Gd Films Presenting Interfacial Structural Disorder
More LessIt is widely believed in the semiconductor community that the progress of electronics-based information technology is coming to an end (ITRS 2007), owing to fundamental electronic limitations. A promising alternative to electrons is the use of spin wave excitations. This has ushered in a potentially powerful approach towards fabricating devices that use these excitations to transmit and process information (Khitun and Wang 2006). This new approach to information-processing technology, known as magnonics, is rapidly growing (Kruglyak and Hicken 2006), and key magnonic components such as spin wave guides, emitters, and filters are currently being explored (Choi et al. 2007). The first working spin-wave-based logic device was experimentally demonstrated by Kostylev et al. (2005). In the present paper we develop and apply a model to analyze the spin dynamics of iron-gadolinium films consisting of a few Gd(0001) atomic planes between two Fe(110) atomic planes. These ultrathin systems may be deposited layer by layer on a nonmagnetic substrate using techniques like dc-sputtering or pulsed laser deposition. They constitute prototypes for iron-gadolinium nanojunctions between iron leads in magnonics. In this system the Fe/Gd interfaces present structural disorder due to the mismatch between the Fe_bcc and Gd_hcp lattices. This engenders a quasi-infinite ensemble of Fe-Gd cluster configurations at the interface. In the absence of DFT or ab initio results for the magnetic Fe-Gd exchange, we have developed an integration-based analytic approach to determine the spatial dependence of this exchange using available experimental data from the corresponding multilayer systems. A dynamic non-local CPA method is also developed to analyze the spin dynamics of the disordered systems. This DNLCPA introduces the idea of a scattering potential built up from the phase matching of the spin dynamics on structurally disordered Fe-Gd interface clusters with the spin dynamics of the virtual crystal. This method accounts properly for the quasi-infinite ensemble of interfacial structural configurations, and yields the configurationally averaged Green's function for the disordered system. The computations yield the spin wave eigenmodes, their energies, lifetimes, and local densities of states, for any given thickness of the ultrathin magnetic Fe-Gd film. Fig.1 DNLCPA calculated dispersion branches for the spin waves propagating in the ultrathin magnetic 1Fe-5Gd-1Fe film (7 atomic planes) presenting structural interfacial disorder. The normalized energies are in units of the iron exchange and spin J(Fe-Fe)S(Fe). The curves are plotted as a function of the y-component of the wave-vector (inverse angstroms), with the z-component = 0, in both figures. Fig.2 DNLCPA calculated lifetimes, in picoseconds, of the spin waves propagating in the ultrathin magnetic 1Fe-5Gd-1Fe film. Acknowledgements: The authors acknowledge QNRF financial support for the NPRP 4-184-1-035 project. References - S. Choi, K.S. Lee, K.Y. Guslienko, S.K. Kim, Phys. Rev. Lett. 98, 087205 (2007) - A. Khitun and K. L. Wang, Proceedings of the Third International Conference on Information Technology, New Generations ITNG, 747 (2006) - M.P. Kostylev, A.A. Serga, T. Schneider, B. Leven, B. Hillebrands, Appl. Phys. Lett. 87 153501 (2005) - V.V. Kruglyak and R.J. Hicken, J. Magn. Magn. Mater. 306, 191 (2006)
-
-
-
Quantum Imaging: Fundamentals And Promises
More LessQuantum imaging can be defined as an area of quantum optics that investigates the ultimate performance limits of optical imaging allowed by the quantum nature of light. Quantum Imaging techniques possess a high potential for improving the performance in recording, storage, and readout of optical images beyond the limits set by the standard quantum level of fluctuations known as the shot noise. This talk aims at giving an overview of the fundamentals of Quantum Imaging as well as its most important directions. We shall discuss the generation of spatially multimode squeezed states of light by means of a travelling-wave optical parametric amplifier. We shall demonstrate that this kind of light allows us to reduce the spatial fluctuations of the photocurrent in a properly chosen homodyne detection scheme with a highly efficient CCD camera. It will be shown that, using the amplified quadrature of the light wave in a travelling-wave optical parametric amplifier, one can perform noiseless amplification of optical images. We shall provide recent experimental results demonstrating single-shot noiseless image amplification by a pulsed optical parametric amplifier. One of the important experimental achievements of Quantum Imaging, coined in the literature as the quantum laser pointer, is the precise measurement of the position and transverse displacement of a laser beam with resolution beyond the limit imposed by the shot noise. We shall briefly describe the idea of the experiment in which the transverse displacement of a laser beam was measured with a resolution of the order of an Angstrom. The problem of precise measurement of the transverse displacement of a light beam brings us to a more general question about the quantum limits of optical resolution. The classical resolution criterion, derived in the nineteenth century by Abbe and Rayleigh, states that the resolution of an optical instrument is limited by diffraction and is related to the wavelength of the light used in the imaging scheme. However, it has long been recognised in the literature that in some cases, when one has some a priori information about the object, one can improve the resolution beyond the Rayleigh limit using so-called super-resolution techniques. As a final topic in this talk, we shall discuss the quantum limits of optical super-resolution. In particular, we shall formulate the standard quantum limit of super-resolution and demonstrate how one can go beyond this limit using specially designed multimode squeezed light.
-
-
-
Computation Of Magnetizations Of …Co][Co_(1-C)Gd_C]_L ][Co]_L [Co_(1-C)Gd_C ]_L [Co… Nanojunctions Between Co Leads, And Of Their Spin Wave Transport Properties
More LessUsing the effective field theory (EFT) and mean field theory (MFT), we investigate the magnetic properties of the nanojunction system...Co][Co_(1-c)Gd_c]_l ][Co]_l [Co_(1-c)Gd_c ]_l [Co... between Co leads. The amorphous ][Co_(1-c)Gd_c]_l ] composite nanomaterial is modeled as a homogeneous random alloy of concentrations "c" on an hcp crystal lattice, and "l" is the number of its corresponding hcp (0001) atomic planes. In particular, EFT determines the appropriate exchange constants for Co and Gd by computing their single-atom spin correlations and magnetizations, in good agreement with experimental data in the ordered phase. The EFT results for the Co magnetization in the leads serve to seed the MFT calculations for the nanojunction from the interfaces inward. This combined analysis yields the sublattice magnetizations for Co and Gd sites, and compensation effects, on the individual nanojunction atomic planes, as a function of the alloy concentration, temperature and nanojunction thicknesses. We observe that the magnetic variables are different for the first few atomic planes near the nanojunction interfaces, but tend to limiting solutions in the core planes. The EFT and MFT calculated exchange constants and sublattice magnetizations are necessary elements for the computation of the spin dynamics for this nanojunction system, using a quasi-classical treatment over the spin precession amplitudes at temperatures distant from the critical temperatures of $Co_{1-c}Gd_c$ alloy. The full analysis in the virtual crystal approximation (VCA) over the magnetic ground state of the system yields both the spin waves (SWs) localized on the nanojunction, and also the ballistic scattering and transport across the nanojunction for SWs incident from the leads by applying the phase field matching theory (PFMT). The model results demonstrate the possibility of resonance assisted maxima for the SW transmission spectra owing to interactions between the incident SWs and the localized spin resonances on the nanojunction. The spectral transmission results for low frequency spin waves are of specific interest for experimentalist, because these lower frequency SWs correspond to submicroscopic wavelengths which are of present interest in experimental magnonics research and the VCA is increasingly valid as a model approximation for such frequencies. Fig.1: Calculated spin variables sigma_Co and sigma_Gd, in the first layer of [Co_(1-c}Gd_(c)]_2[Co]_2[Co_(1-c}Gd_(c)]_2 layer nanojunction as a function of kT in meV. Fig.2: The total reflection R and transmission T cross sections associated with the scattering at the nanojunction for the cobalt leads SW modes 1, 2, with the selected choices of incident angle (phi_z, phi_y). Acknowledgements: The authors acknowledge QNRF financial support for the NPRP 4-184-1-035 project.
-
-
-
An investigation into the optimal usage of social media networks in international collaborative supply chains
Social Media Networks (SMNs) are collaborative tools used at an increasing rate in many business and industrial environments. They are often used in parallel with dedicated Collaborative Technologies (CTs), which are specifically designed to handle dedicated tasks. Within this research project, the specific area of supply chain management is the focus of investigation. Two case studies where CTs are already extensively employed have been conducted to evaluate the scope of SMN usage, to confirm the particular benefits provided, and to identify limitations. The overall purpose is to provide guidelines on joint CT/SMN deployment in developing supply chains. The application of SMNs to supply chain operations is fundamental to addressing the emerging need for increased P2P (peer-to-peer) type communication. This type of communication is between individuals and is typified by increased relationship-type interaction. This is in contrast to traditional B2B (business-to-business) communication, which is typically conducted on a transactional basis, especially where it is confined by the characteristics of dedicated CTs. SMNs can be applied in supply chain networks to deal with unexpected circumstances or problem solving, capture market demands and customer feedback, and in general provide a medium to react to unplanned events. Crucially, they provide a platform where issues can be addressed on a P2P basis in the absence of confrontational, transactional-type interactions. The case studies reported in this paper concern EU-based companies, one being a major national aluminium profile extruder, the second being a bottling plant for a global soft drinks manufacturer. In both cases the application of CTs to their supply chains is well established. However, whilst both companies could readily identify the strengths of their CT systems (information and data sharing, data storage and retrieval), they could also identify limitations. These limitations included the lack of real-time interaction at a P2P level and, interestingly, the lack of a common language between different CT systems in B2B communication. Overall, the comments of the case study companies were that the SMNs provided valuable adjuncts to existing CT systems, but that the SMNs were not integrated with the CT systems. There was a strongly perceived need for a better understanding of the contrasting and complementary capabilities of CTs and SMNs so that in future fully integrated systems could be implemented. Future work in this area will focus on the development of guidelines and procedures for achieving such complementarity in international collaborative supply chains.
-
-
-
Middleware architecture for cloud-based services using software defined networking (SDN)
By Raj Jain
In modern enterprise and Internet-based application environments, a separate middlebox infrastructure is deployed to provide application delivery services such as security (e.g., firewalls, intrusion detection), performance (e.g., SSL offloaders), and scaling (e.g., load balancers). However, there is no explicit support for middleboxes in the original Internet design, forcing datacenter administrators to accommodate middleboxes through ad-hoc and error-prone network configuration techniques. Given their importance and yet the ad-hoc nature of their deployment, we plan to study application delivery (in general) in the context of cloud-based application deployment environments. To fully leverage these opportunities, ASPs need to deploy a globally distributed application-level routing infrastructure to intelligently route application traffic to the right instance. But since such an infrastructure would be extremely hard to own and manage, it is best to design a shared solution where application-level routing could be provided as a service by a third-party provider having a globally distributed presence, such as an ISP. Although these requirements seem separate, they can be converged into a single abstraction for supporting application delivery in the cloud context.
-
-
-
A proposed transportation tracking system for mega construction projects using passive RFID technology
The city of Doha has witnessed a rapid change in its demographics over the past decade. The city has been thoroughly modernized, a massive change in its inhabitants' culture and behavior has occurred, and the need to re-develop its infrastructure has arisen, creating multiple mega construction projects such as the New Doha International Airport, the Doha Metro Network, and Lusail City. A mega-project such as the new airport in Doha requires 30,000 workers on average to be on site every day. This research tested the applicability of radio frequency identification (RFID) technology in tracking and monitoring construction workers during their trip from their housing to the construction site or between construction sites. The worker tracking system developed in this research utilized passive RFID tracking technology due to its efficient tracking capabilities, low cost, and easy maintenance. The system will be designed to monitor construction workers' ridership in a safe and non-intrusive way. It will use a combination of RFID, GPS (Global Positioning System), and GPRS (General Packet Radio Service) technologies. It will enable the workers to receive instant SMS alerts when the bus is within 10 minutes of the designated pick-up and drop-off points, reducing the time the workers spend on the street. This is very critical, especially in a country like Qatar where temperatures can reach 50 degrees Celsius during summer time. The system will also notify management via SMS when the workers board and alight from the bus or enter/leave the construction site. In addition, the system will display the real-time location of the bus and the workers inside the bus at any point in time. Each construction worker is issued one or more unique RFID card(s) to carry. The card will be embedded in the clothing of each worker. As the worker's tag is detected by the reader installed in the bus upon entering or leaving the bus, the time, date and location are logged and transmitted to a secure database. It will require no action on the part of drivers or workers, other than to carry the card, and will deliver the required performance without impeding the normal loading and unloading process. To explore the technical feasibility of the proposed system, a set of tests was performed in the lab. These experiments showed that the RFID tags were effective and stable enough to be used for successfully tracking and monitoring construction workers using the bus.
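As an illustration of the kind of event record such a system would transmit over GPRS, the sketch below shows a hypothetical tag-read handler; the field names, identifiers and JSON payload format are assumptions made for illustration, not the implemented system.

    from dataclasses import dataclass, asdict
    from datetime import datetime
    import json

    @dataclass
    class TagReadEvent:
        """One RFID read captured by the bus-mounted reader (illustrative fields)."""
        tag_id: str          # unique card assigned to a worker
        bus_id: str
        latitude: float      # from the on-board GPS unit
        longitude: float
        timestamp: str

    def on_tag_read(tag_id: str, bus_id: str, gps_fix: tuple) -> str:
        """Build the record that would be sent to the secure database."""
        event = TagReadEvent(
            tag_id=tag_id,
            bus_id=bus_id,
            latitude=gps_fix[0],
            longitude=gps_fix[1],
            timestamp=datetime.utcnow().isoformat(timespec="seconds"),
        )
        return json.dumps(asdict(event))  # payload for the GPRS uplink

    # Example: a worker's card detected while boarding a hypothetical bus B-07
    print(on_tag_read("WRK-001234", "B-07", (25.4285, 51.4890)))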
-
-
-
A routing protocol for smart cities: RPL robustness under study
Smart cities could be defined as developed urban areas creating sustainable economic development and high quality of life by excelling in multiple key areas such as transportation, environment, economy, living, and government. This excellence can be reached through efficiency based on intelligent management and integrated Information and Communication Technologies (ICT). Motivations. In the near future (2030), two thirds of the world population will reside in cities, thus drastically increasing the demands on city infrastructures. As a result, urbanization is becoming a crucial issue. The Internet of Things (IoT) vision foresees billions of devices forming a worldwide network of interconnected objects including computers, mobile phones, RFID tags and wireless sensors. In this study, we focus on Wireless Sensor Networks (WSNs). WSNs are a technology particularly suitable for creating smart cities. A distributed network of intelligent sensor nodes can measure numerous parameters and communicate them wirelessly and in real time, making possible a more efficient management of the city. For example, the pollution concentration in each street can be monitored, water leaks can be detected, or noise maps of the city obtained. The number of applications with WSNs available for smart cities is only bounded by imagination: environmental care, sustainable development, healthcare, efficient traffic management, energy supply, water management, green buildings, etc. In short, WSNs could improve the quality of life in a city. Scope. However, such urban applications often use multi-hop wireless networks with high density to obtain sufficient area coverage. As a result, they need networking stacks and routing protocols that can scale with network size and density, while remaining energy-efficient and lightweight. To this end, the IETF RoLL working group has designed the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL). This paper presents experimental results on the RPL protocol. The RPL properties in terms of delivery ratio, control packet overhead, dynamics and robustness are studied. The results are obtained from several experiments conducted on two large WSN testbeds composed of more than 100 sensor nodes each. In this real-life scenario (high density and convergecast traffic), several intrinsic characteristics of RPL are underlined: path length stability, but reduced delivery ratio and significant overhead (Fig. 1). However, the routing metrics, as defined by default, favor the creation of "hubs" aggregating most of the 2-hop nodes (Fig. 2). To investigate the RPL robustness, we observe its behavior when facing the sudden death of several sensors and when several sensors are redeployed. RPL shows good abilities to maintain the routing process despite such events. However, the paper highlights that this ability can be reduced if only a few critical nodes fail. To the best of our knowledge, this is the first study of RPL on such a large platform.
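To make the routing behaviour discussed above more concrete, the toy sketch below mimics the basic RPL idea (RFC 6550) of a node selecting the parent that minimises its resulting rank, here taken as parent rank plus a link metric such as ETX; it is a simplified illustration of the parent-selection principle, not the testbed code used in the study.

    def select_parent(candidates):
        """candidates: list of (parent_id, parent_rank, link_metric).
        Return the parent giving the lowest resulting rank, plus that rank."""
        best = min(candidates, key=lambda c: c[1] + c[2])
        parent_id, parent_rank, link_metric = best
        return parent_id, parent_rank + link_metric

    # A node hearing three DIO messages; a low-rank neighbour with a good link
    # attracts many children and becomes a "hub", as observed in the experiments.
    neighbours = [("A", 256, 128), ("B", 512, 64), ("C", 256, 512)]
    print(select_parent(neighbours))   # -> ('A', 384)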
-
-
-
Optimization Model For Modern Retail Industry
In a growing market with demanding consumers, the retail industry needs decision support tools that reflect emerging practices and integrate key decisions cutting across several of its departments (e.g. marketing and operations). Current tools may not be adequate to tackle this kind of integration complexity in relation to pricing in order to satisfy the retailing experience of the customers. Of course, it has to be understood that the retailing experience can differ from one country to another and from one culture to another. Therefore, off-the-shelf models may not be able to capture the behavior of customers in a particular region. This research aims at developing novel mixed-integer linear/nonlinear optimization formulations with extensive analytical and computational studies based on the experience of a large retailer in Qatar. The model addresses product lines of substitutable items that serve the same customer need but differ in secondary attributes. The research done in this project demonstrates that there is added value in identifying the shopping characteristics of the consumer base and studying consumer behavior closely in order to develop the appropriate retail analytics solution. The research is supported by a grant obtained through the NPRP project of Qatar Foundation.
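As a rough illustration of the class of models involved, a generic assortment-and-pricing sketch for a line of substitutable items might read as follows (this is a schematic example under assumed notation, not the formulation developed in the project):

    \max_{x,\,p}\ \sum_{i=1}^{n} (p_i - c_i)\, d_i(p, x)\, x_i
    \quad \text{s.t.}\quad \sum_{i=1}^{n} s_i x_i \le S, \qquad x_i \in \{0,1\}, \quad p_i \in [p_i^{\min}, p_i^{\max}]

where x_i selects item i from the substitutable line, p_i and c_i are its price and cost, s_i its shelf-space requirement, S the available space, and d_i(p, x) a demand model capturing substitution among the selected items.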
-
-
-
My Method In Teaching Chemistry And Its Application Using Modern Technology
By Eman Shams
I studied chemical education in a very unique way at Oklahoma State University, Oklahoma, United States of America, and teaching chemistry became a hobby rather than a job for me. This motivated me to apply teaching with educational technology to every chemistry topic that I teach. I applied this expertise through a smart digital flipped-classroom technique in the different chemistry laboratories, on Blackboard, and in the different chemistry tutorial classes that I taught, offered by the Chemistry Department, College of Arts and Sciences, Qatar University, by building an educational web site with the theme "chemistry flipped" for teaching chemistry. The blended-learning chemistry web site provides students with explanatory material that is augmented by audio-visual simulations with technology-immersed education. The general idea is that students work at their own pace, receiving lectures at home via online videos or podcasts which I record and post for them, so that they come prepared to class. The students are then able to use class time to practice what they have learned with traditional schoolwork, but with my time freed up for additional one-on-one support. Students can review lessons anytime, anywhere, on their personal computers and smart phones, reserving class time for in-depth discussions or doing the actual experiment, and most importantly they know and understand what they are doing, through unlocking knowledge and empowering minds. The video lecture is often seen as the key ingredient in the flipped approach. My talk will be on the use of educational technology in teaching chemistry: my chemistry demos go behind the magic, as I used new techniques for helping students visualize concepts, which helped them understand the topics better. I converted the written discussion into a conversation in the cloud system. The web site includes online interactive pre/post-classroom activity assessment pages, live podcast capture of the experiments done by the students, post-classroom activities, dynamic quizzes and practice exam pages, honor pages for recognizing hard-working students, in-class live lecture podcasting, linked experiments and many more. Probabilistic system analysis is used for keeping track of the students' progress, their access and their learning, and I used statistics to relate the students' results before and after the use of my blended-learning, audio-visual simulation, flipped classroom. During the study the students showed an extraordinary passion for chemistry; they studied it on iTunes, YouTube, Facebook and Twitter, they learned it very well, and chemistry was with them everywhere, even in their free time. The results demonstrate the advantages associated with using web-based learning with flipped chemistry teaching methods, which provide support for students in large lecture and laboratory classes. The two biggest changes resulting from the method are the manner in which content is delivered (outside of class) and the way students spend time in the classroom (putting principles into practice). Feedback from students has conveyed that the style is more dynamic and motivational than traditional, passive teaching, which helps keep open courseware going and growing.
-
-
-
High order spectral symplectic methods for solving PDEs on GPU
Efficient and accurate numerical solution of partial differential equations (PDEs) is essential for many problems in science and engineering. In this paper we discuss spectral symplectic methods of different numerical accuracy using the example of the Nonlinear Schrodinger Equation (NLSE), which can be taken as a model for various kinds of conservative systems. First-, second- and fourth-order approximations have been examined and reviewed, considering the execution speed vs. accuracy trade-off. In order to exploit the capabilities of modern hardware, the numerical algorithms are implemented both on CPU and GPU. Results are compared in terms of execution speed, single/double precision, data transfer and hardware specifications.
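For illustration, the sketch below implements a second-order (Strang) split-step Fourier integrator for the focusing NLSE i u_t + 0.5 u_xx + |u|^2 u = 0, a standard member of the operator-splitting family discussed here; grid sizes, the time step and the soliton test case are chosen for demonstration only, and a GPU version would typically swap the FFT calls for a GPU FFT library.

    import numpy as np

    # Second-order (Strang) split-step Fourier scheme for
    #   i u_t + 0.5 u_xx + |u|^2 u = 0   (illustrative parameters)
    N, L, dt, steps = 1024, 40.0, 1e-3, 2000
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # spectral wavenumbers

    u = 1.0 / np.cosh(x)                             # exact soliton at t = 0
    half_linear = np.exp(-0.5j * k**2 * dt / 2)      # half-step of the linear flow

    for _ in range(steps):
        u = np.fft.ifft(half_linear * np.fft.fft(u))   # linear half-step
        u *= np.exp(1j * np.abs(u)**2 * dt)            # full nonlinear step
        u = np.fft.ifft(half_linear * np.fft.fft(u))   # linear half-step

    t = steps * dt
    exact = np.exp(0.5j * t) / np.cosh(x)              # analytic soliton at time t
    print("max error vs exact soliton:", np.max(np.abs(u - exact)))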
-
-
-
The Arabic ontology
We overview the Arabic Ontology, an ongoing project at Sina Institute, Birzeit University, Palestine. The Arabic Ontology is a linguistic ontology that represents the meanings (i.e., concepts) of Arabic terms using formal semantic relationships, such as SubtypeOf and PartOf. In this way, the ontology becomes a tree (i.e., classification) of the meanings of Arabic terms. To build this ontology (see Fig. 1), the set of all Arabic terms is collected; then, for each term, the set of its concepts (polysemy) is identified using unique numbers and described using glosses. Terms referring to the same meaning (called synsets) are given the same concept identifier. These concepts are then classified using Subsumption and Parenthood relationships. The Arabic Ontology follows the same design as WordNet (i.e., a network of synsets), thus it can be used as an Arabic WordNet. However, unlike WordNet, the Arabic Ontology is logically and philosophically well-founded, following strict ontological principles. The Subsumption relation is a formal subset relation. The ontological correctness of a relation in WordNet (e.g., whether "PeriodicTable SubtypeOf Table" is true in reality) is based on whether native speakers accept such a claim. However, the ontological correctness of the Arabic Ontology is based on what scientists accept; if it cannot be determined by science, then on what philosophers accept; and if philosophy does not have an answer, then we refer to what linguists accept. Our classification also follows the OntoClean methodology when dealing with instances, concepts, types, roles, and parts. As described in the next section, the top levels of the Arabic Ontology are derived from philosophical notions, which further govern the ontological correctness of its lower levels. Moreover, glosses are formulated using strict ontological rules focusing on intrinsic properties. Figure 1. Illustration of terms' concepts and their conceptual relations. Why the Arabic Ontology? It can be used in many application scenarios, such as: (1) information search and retrieval, to enrich queries and improve the results' quality, i.e., meaningful search rather than string-matching search; (2) machine translation and term disambiguation, by finding the exact mapping of concepts across languages, as the Arabic Ontology is also mapped to the English WordNet; (3) data integration and interoperability, in which the Arabic Ontology can be used as a semantic reference for several autonomous information systems; (4) the semantic web and web 3.0, by using the Arabic Ontology as a semantic reference to disambiguate meanings used in web sites; (5) a conceptual dictionary, allowing people to easily browse and find meanings and the differences between meanings. The Arabic Ontology top levels. Figure 2 presents the top levels of the Arabic Ontology, which is a classification of the most abstract concepts (i.e., meanings) of the Arabic terms. Only three levels are presented for the sake of brevity. All concepts in the Arabic Ontology are classified under these top levels. We designed these concepts after a deep investigation of the philosophy literature and based on well-recognized upper-level ontologies like BFO, DOLCE, SUMO, and KYOTO. Figure 2. The top three levels of the Arabic Ontology (Alpha Version).
-
-
-
Surface properties of poly(imide-co-siloxane) block copolymers
By Igor Novák
Igor Novák (a), Anton Popelka (a,b), Petr Sysel (c), Igor Krupa (a,d), Ivan Chodák (a), Marian Valentin (a), Jozef Prachár (a), Vladimír Vanko (e). (a) Polymer Institute, Slovak Academy of Sciences, Bratislava, Slovakia; (b) Center for Advanced Materials, Qatar University, Doha, Qatar; (c) Department of Polymers, Institute of Chemical Technology, Faculty of Chemical Technology, Prague, Czech Republic; (d) Center for Advanced Materials, QAPCO Polymer Chair, Qatar University, Doha, Qatar; (e) VIPO, Partizánske, Slovakia. Abstract: Polyimides present an important class of polymers, necessary in microelectronics, printed circuit construction, and aerospace investigation, mainly because of their high thermal stability and good dielectric properties. In the last years, several sorts of block polyimide-based copolymers, namely poly(imide-co-siloxane) (PIS) block copolymers containing siloxane blocks in their polymer backbone, have been investigated. In comparison with pure polyimides, the PIS block copolymers possess some improvements, e.g. enhanced solubility and low moisture sorption, and their surface reaches a higher degree of hydrophobicity already at low content of polysiloxane in the PIS copolymer. This kind of block copolymer is used in high-performance adhesives and coatings. The surface as well as adhesive properties of PIS block copolymers depend on the content and length of the siloxane blocks. The surface properties of PIS block copolymers are strongly influenced by enrichment of the surface with siloxane segments. Microphase separation of PIS block copolymers occurs due to the dissimilarity between the chemical structures of the siloxane and imide blocks, even at relatively low lengths of the blocks. The surface energy of the PIS block copolymer decreases significantly with the concentration of siloxane, from 46.0 mJ.m-2 (pure polyimide) to 34.2 mJ.m-2 (10 wt.% of siloxane) and to 30.2 mJ.m-2 (30 wt.% of siloxane). The polar component of the surface energy reached the value 22.4 mJ.m-2 (pure polyimide), which decreases with the content of siloxane in the PIS copolymer to 4.6 mJ.m-2 (10 wt.% of siloxane) and 0.8 mJ.m-2 (30 wt.% of siloxane). The decline of the surface energy and of its polar component with rising siloxane content is very intense mainly between 0 and 10 wt.% of siloxane in the copolymer. In the case of a further increase of siloxane concentration (above 20 wt.% of siloxane), the surface energy of the PIS copolymer and its polar component level off. The dependence of the peel strength of the adhesive joint PIS copolymer-epoxy versus the polar fraction of the copolymer shows that the steepest gradient is reached at 15 wt.% of siloxane in the PIS block copolymer, and then it levels off. This relation allows the determination of the non-linear relationship between the adhesion properties of the PIS block copolymer and the polar fraction of the copolymer. Acknowledgement: This contribution was supported by project No. 26220220091 of the "Research & Development Operational Program" funded by the ERDF, and as a part of the project "Application of Knowledge-Based Methods in Designing Manufacturing Systems and Materials", project MESRSSR No. 3933/2010-11, and project VEGA 2/0185/10.
-
-
-
An Arabic edutainment system: Using multimedia and physical activity to enhance the cognitive experience for children with intellectual disabilities
Increasing attention has recently been drawn in the human-computer interaction community towards the design and development of accessible computer applications for children and youth with developmental or cognitive impairments. Due to better healthcare and assistive technology, the quality of life for children with intellectual disability (ID) has evidently improved. Many children with ID often have cognitive disabilities, along with overweight problems due to lack of physical activity. This paper introduces an edutainment system specifically designed to help these children have an enhanced and enjoyable learning process, and addresses the need for integrating physical activity into their daily lives. The games are developed with the following pedagogical model in mind: a combination of Mayer's Cognitive Theory of Multimedia Learning with a mild implementation of Skinner's Operant Conditioning, incorporated with physical activity as part of the learning process. The proposed system consists of a padded floor mat made up of sixteen square tiles supported by sensors, which are used to interact with a number of software games specifically designed to suit the mental needs of children with ID. The games use multimedia technology with a tangible user interface. The edutainment system consists of three games, each with three difficulty levels meant to suit the specific needs of different children. The system aims at enhancing the learning, coordination and memorization skills of children with ID while involving them in physical activities, thus offering both mental and physical benefits. The edutainment system was tested on 100 children with different IDs, half of whom have Down syndrome (DS). The children pertain to three disability levels: mildly, moderately and severely disabled. The obtained results depict a high increase in the learning process, as the children became more proactive in the classrooms. The assessment methodology took into account the following constraints: disability type, disability level, gender, scoring, timing, motivation, coordination, acceptance levels and relative performance. The following groups, when compared with other groups, achieved the best results in terms of scores and coordination: children with DS, mildly disabled children, and females. In contrast, children with other IDs, moderately and severely disabled children, and males performed with lower scores and coordination levels, but all the above-mentioned groups exhibited high motivation levels. The rejection rate was found to be very low, with 3% of children refusing to participate. When children repeated the games, 92% were noted to achieve significantly higher results. The edutainment system was developed with the following aims: helping children with ID have an enhanced cognitive experience, allowing them a learning environment where they can interact with the game and exert physical activity, ensuring a proactive role for all in the classroom, and boosting motivation, coordination and memory levels. The results proved that the system had very positive effects on the children in terms of cognition and motivational levels. Instructors also expressed willingness to incorporate the edutainment system into the classroom on a daily basis, as a complementary tool to conventional learning.
-
-
-
Enhancement of multispectral face recognition in unconstrained environment using regularized linear discriminant analysis (LDA)
In this paper, face recognition under unconstrained illumination conditions is investigated. A twofold contribution is proposed: 1) firstly, three state-of-the-art algorithms, namely Multi-block Local Binary Patterns (MBLBP), Histogram of Gabor Phase Patterns (HGPP) and Local Gabor Binary Pattern Histogram Sequence (LGBPHS), are challenged against the IRIS-M3 multispectral face database; 2) secondly, the performance of the three mentioned algorithms, which is drastically decreased by the non-monotonic illumination variation that characterizes the IRIS-M3 face database, is enhanced using multispectral images (MI) captured in the visible spectrum. The use of MI, such as near-infrared (NIR) images, short-wave infrared (SWIR) images, or visible images captured at different wavelengths rather than the usual RGB spectrum, is increasingly gaining the trust of researchers for solving problems related to uncontrolled imaging conditions that usually affect real-world applications such as securing areas with valuable assets, controlling high-traffic borders, or law enforcement. However, one weakness of MI is that they may significantly increase the system processing time due to the huge quantity of data to mine (in some cases thousands of MI are captured for each subject). To solve this issue, we propose to select the optimal spectral bands (channels) for face recognition. Best spectral band selection is achieved using linear discriminant analysis (LDA) to increase the data variance between images of different subjects (between-class variance) while decreasing the variance between images of the same subject (within-class variance). To avoid the problem of data overfitting that generally characterizes the LDA technique, we propose to include a regularization constraint that reduces the solution space of the chosen best spectral bands. The obtained results further highlight the still challenging problem of face recognition in conditions with high illumination variation, and prove the effectiveness of our multispectral-image-based approach in increasing the accuracy of the studied algorithms, namely MBLBP, HGPP and LGBPHS, by at least 9% on the studied database.
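A minimal sketch of the regularisation idea (shrinking the within-class scatter toward the identity before solving the LDA eigenproblem) is shown below; the function name, the ridge-style regulariser and the toy data are illustrative assumptions, not the paper's exact band-selection procedure.

    import numpy as np

    def regularized_lda_directions(X, y, reg=0.1, n_components=2):
        """X: (n_samples, n_features) matrix, y: class labels.
        Returns LDA projection directions computed with a regularised
        within-class scatter (Sw + reg * I) to limit overfitting."""
        classes = np.unique(y)
        mean_all = X.mean(axis=0)
        d = X.shape[1]
        Sw = np.zeros((d, d))
        Sb = np.zeros((d, d))
        for c in classes:
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            Sw += (Xc - mc).T @ (Xc - mc)            # within-class scatter
            diff = (mc - mean_all)[:, None]
            Sb += len(Xc) * (diff @ diff.T)          # between-class scatter
        Sw_reg = Sw + reg * np.eye(d)                # regularisation constraint
        eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw_reg, Sb))
        order = np.argsort(eigvals.real)[::-1]
        return eigvecs[:, order[:n_components]].real

    # Toy example: 3 subjects, 5 spectral-band features, 20 images each
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 5)) + np.repeat(rng.normal(size=(3, 5)), 20, axis=0)
    y = np.repeat([0, 1, 2], 20)
    print(regularized_lda_directions(X, y).shape)    # (5, 2)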
-
-
-
Caloric expenditure estimation for health games
By Aiman Erbad
With the decline in physical activity among young people, it is essential to monitor their physical activity and ensure their calorie expenditure is within the range necessary to lead a healthy lifestyle. For many children and young adults, video gaming is one favorable venue for physical activities. A new flavor of video games on popular platforms, such as Wii and Xbox, aims to improve the health of young adults through competing in games which require players to perform various physical activities. These popular platforms detect the user's movements, and through an avatar in the game, players can be part of the game activities, such as boxing, playing tennis, dancing, avoiding obstacles, etc. Early studies used self-administered questionnaires or interviews to collect data about the patient's activities. These self-reporting methods ask participants to report their activities on an hourly, daily, or weekly basis. But self-reporting techniques suffer from a number of limitations, such as inconvenience in entering data, poor compliance, and inaccuracy due to bias or poor memory. Reporting activities is a sensitive task for overweight/obese individuals, with research evidence showing that they tend to overestimate the calories they burn. Having a tool to help estimate calorie consumption is therefore becoming essential to manage obesity and overweight issues. We propose providing a calorie expenditure estimation service. This service would augment the treatment provided by an obesity clinic or a personal trainer for obese children. Our energy expenditure estimation system consists of two main components: activity recognition and calorie estimation. Activity recognition systems have three main components: a low-level sensing module to gather sensor data, a feature selection module to process raw sensor data and select the features necessary to recognize activities, and a classification module to infer the activity from the captured features. Using the activity type, we can estimate the calorie consumption using existing models of energy expenditure developed on the gold standard of respiratory gas measurements. We chose Kinect as our test platform. The natural user interface in Kinect is the low-level sensing module providing skeleton tracking data. The skeleton positions will be the raw input to our activity recognition module. From this raw data, we define the features that help the classifier, such as the speed of the hands and legs, body orientation, and the rate of change in the vertical and horizontal position. These are some features that can be quantified and passed periodically (e.g., every 5 seconds) to the classifier to distinguish between different activities. Other important features might need more processing, such as the standard deviation, the difference between peaks (in periodic activities), and the distribution of skeleton positions. We also plan to build an index of the calorie expenditure for game activities using the medical gold standard of oxygen consumption, CO2. Game activities, such as playing tennis, running, and boxing, are different from the same real-world activities in terms of energy consumption, and it would be useful to quantify the difference in order to answer the question of whether these "health" games are useful for weight loss.
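The sketch below illustrates the kind of per-window features described above (hand speed, rate of vertical change, overall variability) computed from a stream of skeleton frames; the joint indices, the 5-second window and the synthetic data are assumptions made for illustration, not the implemented Kinect pipeline.

    import numpy as np

    def window_features(frames, fps=30):
        """frames: array of shape (n_frames, n_joints, 3) with (x, y, z)
        skeleton positions for one 5-second window.  Returns a small
        feature vector to be passed to the activity classifier."""
        HAND_R, HAND_L, HIP = 0, 1, 2            # illustrative joint indices
        vel = np.diff(frames, axis=0) * fps      # per-joint velocities (m/s)
        speed = np.linalg.norm(vel, axis=2)      # speed magnitude per joint
        return np.array([
            speed[:, HAND_R].mean(),             # mean right-hand speed
            speed[:, HAND_L].mean(),             # mean left-hand speed
            np.abs(vel[:, HIP, 1]).mean(),       # rate of vertical hip change
            speed.std(),                         # overall movement variability
        ])

    # 5 seconds of synthetic skeleton data: 150 frames, 3 joints
    frames = np.cumsum(np.random.default_rng(1).normal(0, 0.01, (150, 3, 3)), axis=0)
    print(window_features(frames))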
-
-
-
Towards socially-interactive telepresence robots for the 2022 world cup
The World Cup is the most widely viewed sporting event. The total attendance is in the hundreds of thousands. The first-ever World Cup in an Arab nation will be hosted by Qatar in 2022. As the country prepares for this event, the current paper proposes telepresence robots for the World Cup. Telepresence robots refer to robots that allow a person to work remotely in another place as if he or she were physically present there. For such a big event as the World Cup, we can envision that organizers need to monitor the minute-by-minute events as they occur in multiple venues. Likewise, soccer fans who will not be able to attend the events can be present even if they are not physically there, and without the need to travel. Telepresence robots can enable the organizers and participants to "be there". This work describes some of the author's findings involving the interactions of humans and robots. Specifically, the work describes users' perceptions and physiological data when touch and gestures are passed over the internet. The results show that there is much potential for telepresence robots to enhance the utility and the organizers' and participants' overall experience of the 2022 World Cup.
-
-
-
Assistive technology for improved literacy among the deaf and hard of hearing
We describe an assistive technology for improved literacy among the Deaf and Hard of Hearing that is cost-effective and accessible to deaf individuals and their families/service providers (e.g., educators), businesses which employ them or list them as customers, and healthcare professionals. The technology functions as: (1) A real-time translation system between Moroccan Sign Language (a visual-gestural language) and standard written Arabic. Moroccan Sign Language (MSL) is a visual/gestural language that is distinct from spoken Moroccan Arabic and Modern Standard Arabic (SA) and has no text representation. In this context, we describe some challenges in SA-to-MSL machine translation. In Arabic, word structure is not built linearly as is the case in concatenative morphological systems, which results in a large space of morphological variation. The language has a large degree of ambiguity in word senses, and further ambiguity attributable to a writing system that omits diacritics (e.g. short vowels, consonant doubling, inflection marks). The lack of diacritics coupled with word order flexibility are causes of ambiguity in the syntactic structure of Arabic. The problem is compounded when translating into a visual/gestural language that has far fewer signs than words of the source language. In this presentation, we show how natural language processing tools are integrated into the system, present the architecture of the system, and provide a demo of several input examples with different levels of complexity. Our Moroccan Sign Language database currently has 2000 graphic signs and their corresponding video clips. The database extension is an ongoing task that is done in collaboration with MSL interpreters, deaf signers and educators in Deaf schools in different regions of Morocco. (2) An instructional tool: Deaf school children, in general, have poor reading skills. It is easier for them to understand text represented in sign language than in print. Several works have demonstrated that a combination of sign language and spoken/written language can significantly improve literacy and comprehension (Singleton, Supalla, Litchfield, & Schley, 1998; Prinz, Kuntz, & Strong, 1997; Ramsey & Padden, 1998). While many assistive technologies have been created for the blind, such as hand-held scanners and screen readers, there are only a few products targeting poor readers who are deaf. An example of such technology is the iCommunicator™, which translates in real time: speech to text, speech/typed text to videos of signs, and speech/typed text to computer-generated voice. This tool, however, does not generate text from scans and display it with sign graphic supports that a teacher can print, edit, and use to support reading. It also does not capture screen text. We show a set of tools aiming at improving literacy among the Deaf and Hard-of-Hearing. Our tools offer a variety of input and output options, including scanning, screen text transfer, sign graphics and video clips. The technology we have developed is useful to teachers, educators, healthcare professionals, speech/language pathologists, etc. who need to support the understanding of Arabic text with Moroccan Sign Language signs for purposes of literacy improvement, curriculum enhancement, or communication in emergency situations.
-
-
-
Unsupervised Arabic word segmentation and statistical machine translation
Word segmentation is a necessary step for natural language processing applications, such as machine translation and parsing. In this research we focus on Arabic word segmentation to study its impact on Arabic-to-English translation. There are accurate word segmentation systems for Arabic, such as MADA (Habash, 2007). However, such systems usually need manually built data and rules for the Arabic language. In this work, we look at unsupervised word segmentation systems to see how well they perform on Arabic, without relying on any linguistic information about the language. The methodology of this research can be applied to many other morphologically complex languages. We focus on three leading unsupervised word segmentation systems proposed in the literature: Morfessor (Creutz and Lagus, 2002), ParaMor (Monson, 2007), and Demberg's system (Demberg, 2007). We also use two different segmentation schemes of the state-of-the-art MADA and compare their precision with the unsupervised systems. After training the three unsupervised segmentation systems, we apply their resulting models to segment the Arabic part of the parallel data for Arabic-to-English statistical machine translation (SMT) and measure the impact on translation quality. We also build segmentation models using the two schemes of MADA for SMT to compare against the baseline system. The 10-fold cross-validation results indicate that the unsupervised segmentation systems turn out to be usually inaccurate, with a precision of less than 40%, and hence do not help with improving SMT quality. We also observe that both segmentation schemes of MADA have very high precision. We experimented with two MADA schemes: a scheme with a measured segmentation framework improved the translation accuracy, while a second scheme, which performs more aggressive segmentation, failed to improve SMT quality. We also provide some rule-based supervision to correct some of the errors in our best unsupervised models. While this framework performs better than the baseline unsupervised systems, it still does not outperform the baseline MT quality. We conclude that in our unsupervised framework, the noise introduced by the unsupervised segmentation offsets the potential gains that segmentation could provide to MT. We conclude that measured, supervised word segmentation improves Arabic-to-English quality. In contrast, aggressive and exhaustive segmentation introduces new noise to the MT framework and actually harms its quality. This publication was made possible by the generous support of the Qatar Foundation through Carnegie Mellon University's Seed Research program provided to Kemal Oflazer. The statements made herein are solely the responsibility of the authors.
-
-
-
A services-oriented infrastructure for e-science
By Syed Abidi
The study of complex multi-faceted scientific questions demands innovative computing solutions—solutions that transcend the management of big data, towards dedicated semantics-enabled, services-driven infrastructures that can effectively aggregate, filter, process, analyze, visualize and share the cumulative scientific efforts and insights of the research community. From a technical standpoint, E-Science calls for technology-enabled collaborative research platforms to (i) collect, store and share multi-modal data collected from different geographic sites; (ii) perform complex simulations and experiments using sophisticated simulation models; (iii) design complex experiments by integrating data and models and executing them as per the experiment workflow; (iv) visualize high-dimensional simulation results; and (v) aggregate and share the scientific results (Fig 1). Taking a knowledge management approach, we have developed an innovative E-Science platform—termed the Platform for Ocean Knowledge Management (POKM)—that is built using innovative web-enabled services, services-oriented architecture, semantic web, workflow management and data visualization technologies. POKM offers a suite of E-Science services that allow oceanographic researchers to (a) handle large volumes of ocean and marine life data; (b) access, share, integrate and operationalize the data and simulation models; (c) visualize data and simulation results; (d) collaborate across multiple sites in joint scientific research experiments; and (e) form a broad, virtual community of national and international researchers, marine resource managers, policy makers and climate change specialists (Fig 2). The functional objective of our E-Science infrastructure is to establish an online scientific experimentation platform that supports an assortment of data/knowledge access and processing tools to allow a group of scientists to collaborate and conduct complex experiments by sharing data, models, knowledge, computing resources and expertise. Our E-Science approach is to complement data-driven approaches with domain-specific knowledge-centric models in order to establish causal, associative and taxonomic relations between (a) raw data and modeled observations; (b) observations and their causes; and (c) causes and theoretical models. This is achieved by taking a unique knowledge management approach, whereby we have exploited semantic web technologies to semantically describe the data, scientific models, knowledge artifacts and web services. The use of semantic web technologies provides a mechanism for the selection and integration of problem-specific data from large repositories. To define the functional aspects of the e-science services we have developed a services ontology that provides a semantic description of knowledge-centric e-science services. POKM is modeled along a services-oriented architecture that exposes a range of task-specific web services accessible through a web portal. The POKM architecture features five layers—Presentation Layer, Collaboration Layer, Service Composition Layer, Service Layer and Ontology Layer (Fig 3). POKM is applied to the domain of oceanography to understand our changing eco-system and its impact on marine life. POKM helps researchers investigate (a) changes in marine animal movement on time scales of days to decades; (b) coastal flooding due to changes in certain ocean parameters; (c) the density of fish colonies and stocks; and (d) time-varying physical characteristics of the oceans (Fig 4 & 5).
In this paper, we present the technical architecture and functional description of POKM, highlighting the various technical innovations and their applications to E-Science.
-
-
-
Informatics and technology to address common challenges in public health
By Jamie Pina
Health care systems in countries around the world are focused on improving the health of their populations. Many countries face common challenges related to capturing, structuring, sharing and acting upon various sources of information in service of this goal. Information science, in combination with information and communications technologies (ICT) such as online communities and cloud-based services, can be used to address many of the challenges encountered when developing initiatives to improve population health. This presentation by RTI International will focus on the development of the Public Health Quality Improvement Exchange (www.phqix.org), where informatics and ICT have been used to develop new approaches to public health quality improvement, a challenge common to many nations. The presentation will also identify lessons learned from this effort and the implications for Gulf Cooperation Council (GCC) countries. This presentation addresses two of Qatar's cross-cutting research grand challenges: "Managing the Transition to a Diversified, Knowledge-based Society" and "Developed Modernized Integrated Health Management." The first grand challenge is addressed by our research on the use of social networks and their relationship to public health practice environments. The second is addressed through our research in the development of taxonomies that align with the expectations of public health practitioners to facilitate information sharing [1]. Health care systems aim to have the most effective practices for detecting, monitoring, and responding to communicable and chronic conditions. However, national systems may fail to identify and share lessons gained through the practices of local and regional health authorities. Challenges include having appropriate mechanisms for capturing, structuring, and sharing these lessons in uniform, cost-effective ways. The presentation will explore how a public health quality improvement exchange, where practitioners submit and share best practices through an online portal, helps address these challenges. This work also demonstrates the advantages of a user-centered design process to create an online resource that can successfully accelerate learning and application of quality improvement (QI) by governmental public health agencies and their partners. Public health practitioners, at the federal, state, local and tribal levels, are actively seeking to promote the use of quality improvement to improve efficiency and effectiveness. The Public Health Quality Improvement Exchange (PHQIX) was developed to assist public health agencies and their partners in sharing their experiences with QI and to facilitate increased use of QI in public health practice. Successful online exchanges must provide compelling incentives for participation, site design that aligns with user expectations, information that is relevant to the online community, and presentation that encourages use. Target audience members (beneficiaries) include public health practitioners, informatics professionals, and officials within health authorities. This discussion aims to help audience members understand how new approaches and Web-based technologies can create highly reliable and widely accessible services for critical public health capabilities including quality improvement and data sharing. 1. Pina, J., et al., Synonym-based Word Frequency Analysis to Support the Development and Presentation of a Public Health Quality Improvement Taxonomy in an Online Exchange. Stud Health Technol Inform, 2013. 192: p. 1128.
-
-
-
Development of a spontaneous large vocabulary speech recognition system for Qatari Arabic
In this work, we develop a spontaneous large-vocabulary speech recognition system for Qatari Arabic (QA). A major problem with dialectal Arabic speech recognition is the sparsity of speech resources. We therefore propose an Automatic Speech Recognition (ASR) framework to jointly use Modern Standard Arabic (MSA) data and QA data to improve acoustic and language modeling through orthographic normalization, cross-dialectal phone mapping, data sharing, and acoustic model adaptation. A wide-band speech corpus has been developed for QA. The corpus consists of 15 hours of speech data collected from different TV series and talk-show programs. The corpus was manually segmented and transcribed. A QA tri-gram language model (LM) was linearly interpolated with a large MSA LM in order to decrease the Out-Of-Vocabulary (OOV) rate and to improve perplexity. The vocabulary consists of 21K words extracted from the QA training set with an additional 256K MSA vocabulary. The acoustic model (AM) was trained with a data pool of QA data and an additional 60 hours of MSA data. In order to boost the contribution of the QA data, Maximum-A-Posteriori (MAP) adaptation was applied to the resulting AM using only the QA data, effectively increasing the weight of dialectal acoustic features in the final cross-lingual model. All training was performed with Maximum Mutual Information Estimation (MMIE) and with Speaker Adaptive Training (SAT) applied on top of MMIE. Our proposed approach achieves more than a 16% relative reduction in WER on the QA test set compared to a baseline system trained with only QA data. This work was funded by a grant from the Qatar National Research Fund under its National Priorities Research Program (NPRP), award number NPRP 09-410-1-069. The reported experimental work was performed at Qatar University in collaboration with the University of Illinois.
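The linear interpolation of the dialectal and MSA language models mentioned above takes the standard form (generic notation, with the interpolation weight typically tuned on held-out QA data; this is the textbook formulation, not a detail quoted from the paper):

    P(w \mid h) = \lambda\, P_{\mathrm{QA}}(w \mid h) + (1 - \lambda)\, P_{\mathrm{MSA}}(w \mid h), \qquad 0 \le \lambda \le 1

where w is the predicted word, h its n-gram history, and P_QA and P_MSA are the dialectal and MSA tri-gram models, respectively.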
-
-
-
On faults and faulty programs
By Ali Jaoua
Abstract. The concept of a fault has been introduced in the context of a comprehensive study of system dependability, and is defined as a feature of the system that causes it to fail with respect to its specification. In this paper, we argue that this definition does not enable us to localize a fault, nor to count faults, nor to define fault density. We argue that rather than defining a fault, we ought to focus on defining faulty programs (or program parts); also, we introduce inductive rules that enable us to localize faults to an arbitrary level of precision; finally, we argue that to claim that a program part is faulty one must often make an assumption about other program parts (and we find that the claim is only as valid as the assumption). Keywords. Fault, error, failure, specification, correctness, faulty program, refinement. Acknowledgement: This publication was made possible by a grant from the Qatar National Research Fund NPRP04-1109-1-174. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the QNRF.
-
-
-
Compiler-directed design of memory hierarchy for embedded systems
In embedded real-time communication and multimedia processing applications, the manipulation of large amounts of data has a major effect on both the power consumption and the performance of the system. Due to the significant amount of data transfers between the processing units and the large and energy-consuming off-chip memories, these applications are often called data-dominated or data-intensive. Providing sufficient bandwidth to sustain fast and energy-efficient program execution is a challenge for system designers: due to the growing gap in speed between processors and memories, the performance of the whole VLSI system will mainly depend on the memory subsystem when the memory is unable to provide data and instructions at the pace required by the processor. This effect is sometimes referred to in the literature as the memory wall problem. At system level, the power cost can be reduced by introducing an optimized custom memory hierarchy that exploits the temporal data locality. Hierarchical memory organizations reduce energy consumption by exploiting the non-uniformity of memory accesses: the reduction can be achieved by assigning the frequently accessed data to the low hierarchy levels, a problem being how to optimally assign the data to the memory layers. This hierarchical assignment diminishes the dynamic energy consumption of the memory subsystem, which grows with the number of memory accesses. Moreover, it diminishes the static energy consumption as well, since this decreases monotonically with the memory size. Furthermore, within a given memory hierarchy level, power can be reduced by memory banking, whose principle is to divide the address space into several smaller blocks and to map these blocks to physical memory banks that can be independently enabled and disabled. Memory partitioning is also a performance-oriented optimization strategy, because of the reduced latency due to accessing smaller memory blocks. Arbitrarily fine partitioning is prevented since an excessively large number of small banks is area-inefficient, imposing a severe wiring overhead which increases communication power and decreases performance. This presentation will introduce an electronic design automation (EDA) methodology for the design of hierarchical memory architectures in embedded data-intensive applications, mainly in the area of multidimensional signal processing. The input of this memory management framework is the behavioral specification of the application, which is assumed to be procedural and affine. Figure 1 shows an illustrative example of a behavioral specification with 6 nested loops. The framework employs a formal model operating with integral polyhedra, using techniques specific to the data-dependence analysis employed in modern compilers. Different from previous works, three optimization problems - the data assignment to the memory layers (on-chip scratch-pad memory and off-chip DRAM), the mapping of multidimensional signals to the physical memories, and the banking of the on-chip memory (see Figure 2) - are addressed in a consistent way, based on the same formal model. The main design target is the reduction of the static and dynamic energy consumption in the memory subsystem, but the same formal model and algorithmic principles can be applied to the reduction of the overall time of access to memories, or to combinations of these design goals.
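As a simplified illustration of the layer-assignment decision only (not of the polyhedral framework itself), the sketch below greedily places the most frequently accessed signals into a limited on-chip scratch-pad so that accesses to the energy-hungry off-chip DRAM are reduced; the signal names, sizes and access counts are made-up numbers.

    def assign_to_scratchpad(signals, spm_capacity):
        """signals: list of (name, size_bytes, access_count).
        Greedy assignment by accesses per byte; returns (on_chip, off_chip)."""
        ranked = sorted(signals, key=lambda s: s[2] / s[1], reverse=True)
        on_chip, off_chip, used = [], [], 0
        for name, size, accesses in ranked:
            if used + size <= spm_capacity:
                on_chip.append(name)       # frequently accessed -> scratch-pad
                used += size
            else:
                off_chip.append(name)      # remainder stays in off-chip DRAM
        return on_chip, off_chip

    signals = [("A", 64_000, 2_000_000), ("B", 16_000, 1_500_000),
               ("C", 256_000, 800_000), ("D", 8_000, 100_000)]
    print(assign_to_scratchpad(signals, spm_capacity=96_000))
    # -> (['B', 'A', 'D'], ['C'])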
-
-
-
Dynamic Simulation Of Internal Logistics In Aluminum Production Processes
The production of aluminum products, like other metallurgical industries, comprises a large variety of material processing steps. These processes require a multitude of material handling operations, buffers and transports to interconnect single process steps within aluminum smelters or downstream manufacturing plants like rolling mills or extrusion plants. On the one hand, the electrolysis process, as the core process of primary aluminum production, requires an amount of material that is factors larger than the volume of metal produced. On the other hand, downstream manufacturing processes show an enormous variation of mechanical properties and surface qualities, and comprise many fabrication steps including forming, heat treatment and finishing that can appear in an arbitrary order. Therefore the internal logistics composing the entire internal material flow of one production facility is increasingly regarded as one key success factor for efficient production processes as part of supply-chain management. Dynamic simulations based on discrete event simulation models can be effective tools to support planning processes along the entire value chain in aluminum production plants. Logistics simulation models also ideally accompany improvement and modernization measures and the design of new production facilities, to quantify the resulting overall equipment effectiveness and the reduction of energy consumption and emissions. Hydro Aluminium has a long history in solving logistic challenges using simulation tools. Limitations of former models have been the starting point for a further development of simulation tools based on more flexible models. They address the streamlining of operations and transportation, in particular in aluminum smelters, and material flow problems as well as new plant concepts in support of investment decisions. This presentation first gives a brief introduction to the main upstream and downstream processes of aluminum production, to explain the different driving forces of material flow. Secondly, the principles of mapping the specific material flow of aluminum smelters and of typical downstream manufacturing plants are outlined. Examples demonstrate the benefit of a systematic modeling approach.
-
-
-
A robust method for line and word segmentation in handwritten text
Line and word segmentation is a key step in any document image analysis system. It can be used, for instance, in handwriting recognition when separating words before their recognition. Line segmentation can also serve as a prior step before extracting the geometric characteristics of lines which are unique to each writer. Text line and word segmentation is not an easy task because of the following problems: 1) text lines do not all have the same direction in handwritten text; 2) text lines are not always horizontal, which makes their separation more difficult; 3) characters may overlap between successive text lines; 4) it is often confusing to distinguish between inter- and intra-word distances. In our method, line segmentation is done by using a smoothed version of the handwritten document, which makes it possible to detect the main line components using a subsequent thresholding algorithm. Each connected component of the resulting image is then assigned a separate label which represents a line component. Then, each text region which intersects with only one line component is assigned the same label as that line component. The Voronoi diagram of the image thus obtained is then computed in order to label the remaining text pixels. Word segmentation is performed by computing a generalized Chamfer distance in which the horizontal distance is slightly favored. This distance is subsequently smoothed in order to reflect the distances between word components and neglect the distance to dots and diacritics. Word segmentation is then performed by thresholding the distance thus obtained. The threshold depends on the characteristics of the handwriting. We have therefore computed several features in order to predict it, including: the sum of maximum distances within each line component, the number of connected components within the document, and the average width and height of lines. The optimal threshold is then obtained by training a linear regression on those features on a training set of about 100 documents. This method achieved the best performance on the ICFHR Handwriting Segmentation Contest dataset, reaching a matching score of 97.4% on line segmentation and 91% on word segmentation. The method has also been tested on the QUWI Arabic dataset, reaching 97.1% on line segmentation and 49.6% on word segmentation. The relatively low performance of word segmentation in Arabic script is due to the fact that words are very close to each other compared to English script. The proposed method tackles most of the problems of line and word segmentation and achieves high segmentation results. It can however be improved by combining it with a handwriting recognizer which will eliminate words that are not recognized.
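A minimal sketch of the first stage described above (smoothing the binary page image and extracting connected line components) is given below using standard scipy operations; the anisotropic smoothing scale and threshold are placeholder values, and the Voronoi-based relabelling of remaining pixels and the Chamfer-distance word step are deliberately omitted.

    import numpy as np
    from scipy.ndimage import gaussian_filter, label

    def line_components(binary_page, sigma_y=1.0, sigma_x=15.0, thresh=0.1):
        """binary_page: 2-D array with text pixels = 1.  Anisotropic smoothing
        merges characters along the writing direction so that each text line
        becomes one blob; labelled blobs are the candidate line components."""
        smoothed = gaussian_filter(binary_page.astype(float), (sigma_y, sigma_x))
        blobs = smoothed > thresh
        labels, n_lines = label(blobs)
        return labels, n_lines

    # Tiny synthetic page with two "text lines"
    page = np.zeros((60, 200), dtype=np.uint8)
    page[10:14, 20:180:4] = 1     # line 1: dashes standing in for characters
    page[40:44, 20:180:4] = 1     # line 2
    labels, n = line_components(page)
    print("detected line components:", n)   # expected: 2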
-
-
-
Optimizing Qatar steel supply chain management system
More LessWe have developed a linear programming formulation to describe the Qatar Steel manufacturing supply chain from suppliers to consumers. The purpose of the model is to provide answers regarding the optimal amount of raw materials to be requested from suppliers, the optimal amount of finished products to be delivered to each customer and the optimal inventory level of raw materials. The model is validated and solved using the GAMS software. Sensitivity analysis on the proposed model is conducted in order to draw useful conclusions regarding the factors that play the most important role in the efficiency of the supply chain. In the second part of the project, we have set up a simulation model to produce a specific set of Key Performance Indicators (KPIs). The KPIs are developed to characterize the supply chain performance in terms of responsiveness, efficiency, and productivity/utilization. The model is programmed using the WITNESS simulation software. The developed QS WITNESS simulation model aims to assess and validate the current status of the supply chain performance in terms of a set of KPIs, taking into consideration the key deterministic and stochastic factors, from suppliers and production plant processes to distributors and consumers. Finally, a simulated annealing algorithm has been developed that will be used to set the model variables to achieve a multi-criteria tradeoff among the defined supply chain KPIs.
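A generic sketch of the simulated annealing step is shown below; the toy objective stands in for the weighted multi-criteria KPI score that the simulation model would return, and the cooling schedule and step size are illustrative choices, not the project's settings.

```python
import math
import random

def simulated_annealing(objective, x0, step=0.1, t0=1.0, cooling=0.995, iters=5000):
    """Generic simulated annealing over a vector of decision variables.
    `objective` is a stand-in for the weighted multi-criteria KPI score
    returned by the simulation model (lower is better)."""
    x, fx = list(x0), objective(x0)
    best, fbest, t = list(x), fx, t0
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = objective(cand)
        # Accept improvements, and occasionally accept worse moves
        # with a probability that shrinks as the temperature cools.
        if fc < fx or random.random() < math.exp((fx - fc) / max(t, 1e-9)):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = list(cand), fc
        t *= cooling
    return best, fbest

# Toy stand-in objective: trade off responsiveness against inventory cost.
weights = (0.6, 0.4)
obj = lambda v: weights[0] * (v[0] - 2.0) ** 2 + weights[1] * abs(v[1] - 5.0)
print(simulated_annealing(obj, [0.0, 0.0]))
```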
-
-
-
An ultra-wideband RFIC attenuator for communication and radar systems
By Cam NguyenAttenuators are extensively employed as amplitude control circuits in communication and radar systems. Flatness, attenuation range, and bandwidth are typical specifications for attenuators. Most attenuators in previous studies rely on the basic topologies of the Pi-, T-, and distributed attenuators. The performance of the Pi- and T-attenuators, however, is affected substantially by the switching performance of the transistors, and it is hard to obtain optimum flatness, attenuation range, and bandwidth with these attenuators. The conventional distributed attenuator also demands a large chip area for large attenuation ranges. We report the design of a new microwave/millimeter-wave CMOS attenuator. A design method is proposed and implemented in the attenuator to improve its flatness, attenuation range, and bandwidth. It is recognized that the Pi- and T-attenuators at a certain attenuation state inherently have an insertion-loss slope that increases with frequency. This response is due to the off-capacitance of the series transistors. On the other hand, distributed attenuators can be designed to have the opposite insertion-loss slope by shortening the transmission lines, because short transmission lines cause the center frequency to shift higher. The proposed design method exploits the two opposite insertion-loss slopes and is implemented by connecting Pi-, T- and distributed attenuators in cascade. Over 10-67 GHz, the measured results exhibit an attenuation flatness of 6.8 dB and an attenuation range of 32-42 dB.
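The design idea stated here, that a rising Pi/T insertion-loss slope can be cancelled by the falling slope of a shortened distributed section, can be illustrated numerically. The sketch below uses invented dB figures purely to show how the cascade flattens; it is not based on the measured 10-67 GHz device data.

```python
import numpy as np

# Illustration of the slope-cancellation idea only; the dB/GHz slopes
# below are invented numbers, not measured device data.
freq = np.linspace(10, 67, 58)               # GHz
pi_t_loss = 4.0 + 0.05 * (freq - 10)         # Pi/T section: loss rises with frequency
dist_loss = 6.0 - 0.05 * (freq - 10)         # shortened distributed section: loss falls
cascade = pi_t_loss + dist_loss              # opposite slopes cancel in cascade

for name, loss in [("Pi/T", pi_t_loss), ("distributed", dist_loss),
                   ("cascade", cascade)]:
    print(f"{name:11s} flatness over band: {loss.max() - loss.min():.2f} dB")
```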
-
-
-
Multiphase production metering: Benefits from an Industrial data validation and reconciliation approach
By Simon MansonAuthors: Simon Manson, Mohamed Haouche, Pascal Cheneviere and Philippe Julien, TOTAL Research Center-Qatar at QSTP, Doha, Qatar. Context and objectives: TOTAL E&P QATAR (TEPQ) is the operator of the Al-Khaleej offshore oil field under a Production Sharing Agreement (PSA) with Qatar Petroleum. The Al-Khaleej field is characterised by a high water cut (ratio of water over liquid), which classifies the field as mature. Operating this field safely and cost effectively requires a large involvement of cutting-edge technologies with strict compliance with internal procedures and standards. Metering's main objective is to deliver accurate and close to real-time production data from online measurements, allowing the optimization of operations and the mitigation of potential HSE-related risks. Solution: The solution tested by TEPQ is based on a Data Validation and Reconciliation (DVR) approach. This approach is well known in the hydrocarbon downstream sector and in power plants. Its added value lies mainly in the automatic reconciliation of several data streams originating not only from the multiphase flow meters but also from other process parameters. The expected result of this approach is an improvement of data accuracy and increased data availability for operational teams. A DVR pilot has been implemented in the Al-Khaleej field for multiphase flow determination. It automatically performs online data acquisition, data processing and daily reporting through a user-friendly interface. Results: The communication presents the latest findings obtained from the DVR approach. A sensitivity analysis has been performed to highlight the impact of potentially biased data on the integrated production system and on the oil and water production rates. These findings are of high importance for trouble-shooting diagnostics, to identify the source (instruments, process models, etc.) of a malfunction and to define remedial solutions. Oil and water production data, with their relative uncertainties, are presented to illustrate the benefits of the DVR approach in challenging production conditions. Conclusions: The main benefits of the DVR approach and its user interface lie in the time saved in data post-processing to obtain automatically reconciled data with better accuracy. In addition, thanks to its error detection capability, the DVR facilitates troubleshooting identification (alarming).
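A minimal sketch of the reconciliation idea behind DVR is shown below: redundant measurements are adjusted by weighted least squares so that they satisfy a linear balance constraint. The three-stream split and the uncertainties are hypothetical, not Al-Khaleej data.

```python
import numpy as np

# Minimal data validation & reconciliation (DVR) sketch: reconcile
# redundant flow measurements against a linear mass balance A x = 0.
# The three-stream split below and the uncertainties are hypothetical.
x_meas = np.array([100.0, 61.0, 42.0])     # total, oil, water (e.g. m3/h)
sigma  = np.array([2.0, 1.5, 1.5])         # measurement standard deviations
V = np.diag(sigma ** 2)

# Balance constraint: total - oil - water = 0
A = np.array([[1.0, -1.0, -1.0]])

# Closed-form weighted least-squares reconciliation:
# x_rec = x_meas - V A^T (A V A^T)^-1 A x_meas
correction = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ x_meas)
x_rec = x_meas - correction

print("raw measurements :", x_meas, "imbalance:", float(A @ x_meas))
print("reconciled values:", np.round(x_rec, 2), "imbalance:",
      round(float(A @ x_rec), 6))
```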
-
-
-
Dynamic and static generation of multimedia tutorials for children with special needs: Using Arabic text processing and ontologies
More LessWe propose a multimedia-based learning system to teach children with intellectual disabilities (ID) basic concepts in Science, Math and daily living tasks. The tutorials' pedagogical development is based on Mayer's Cognitive Learning Model combined with Skinner's Behaviorist Operant Conditioning Model. Two types of Arabic tutorials are proposed: (1) Statically generated tutorials, which are pre-designed by special needs instructors, and developed by animation experts. (2) Dynamically generated tutorials, which are developed using natural language processing and ontology building. Dynamic tutorials are generated by processing Arabic text, and using machine learning to query the Google engine and generate multimedia elements, which are automatically updated into an ontology system and are hence used to construct a customized tutorial. Both types of tutorials have shown considerable improvements in the learning process and allowed children with ID to enhance their cognitive skills and become more motivated and proactive in the classroom.
-
-
-
A tyre safety study in Qatar and application of immersive simulators
By Max RenaultIn support of Qatar's National Road Safety Strategy and under the umbrella of the National Traffic Safety Committee, Qatar Petroleum's Research and Technology Department in cooperation with Williams Advanced Engineering has undertaken a study of the state of tyre safety in the country. This study has reviewed the regulatory and legislative frameworks governing tyre usage in the country, as well as collected data on how tyres are being used by the populace. To understand the state of tyre usage in Qatar a survey of 239 vehicles undergoing annual inspection was conducted and an electronic survey querying respondents' knowledge of tyres received 808 responses. The findings identified deficiencies in four key areas: accident data reporting, tyre certification for regional efficacy, usage of balloon tyres, and the public's knowledge of tyres and tyre care. Following completion of this study, Qatar Petroleum has commissioned Williams Advanced Engineering to produce an immersive driving simulator for the dual purposes of research and education. This simulator will provide a platform for research investigations of the effect of tyre performance and failure on vehicle stability; additionally this will allow road users, in a safe environment, to experience the effects of various tyre conditions such as a failure, and learn appropriate responses.
-
-
-
Random projections and haar cascades for accurate real-time vehicle detection and tracking
More LessThis paper presents a robust real-time vision framework that detects and tracks vehicles from stationary traffic cameras with certain regions of interest. The framework enables intelligent transportation and road safety applications such as road-occupancy characterization, congestion detection, traffic flow computation, and pedestrian tracking. It consists of three main modules: 1) detection, 2) tracking, and 3) data association. To this end, vehicles are first detected using Haar-like features. In the second phase, a light-weight appearance-based model is built using random projections to keep track of the detected vehicles. The data association module fuses new detections and existing targets for accurate tracking. The practical value of the proposed framework is demonstrated through an evaluation involving several real-world experiments and a variety of challenges.
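A hedged sketch of the detection and appearance-model steps is given below, assuming OpenCV; the cascade file name is a placeholder for a trained vehicle cascade, and the projection size, patch size, matching threshold and greedy association are illustrative choices rather than the paper's settings.

```python
import cv2
import numpy as np

# Sketch of the detection + appearance-model steps. The cascade file name
# is a placeholder for a trained vehicle cascade; parameters are illustrative.
cascade = cv2.CascadeClassifier("vehicle_haar_cascade.xml")   # hypothetical file
rng = np.random.default_rng(0)
PROJ_DIM, PATCH = 32, (24, 24)
R = rng.standard_normal((PROJ_DIM, PATCH[0] * PATCH[1])) / np.sqrt(PROJ_DIM)

def detect_and_describe(frame_gray):
    """Detect vehicles with the Haar cascade, then build a light-weight
    appearance descriptor for each detection via random projection."""
    boxes = cascade.detectMultiScale(frame_gray, scaleFactor=1.1, minNeighbors=4)
    descriptors = []
    for (x, y, w, h) in boxes:
        patch = cv2.resize(frame_gray[y:y + h, x:x + w], PATCH).astype(np.float32)
        descriptors.append(R @ patch.ravel())      # compressed appearance vector
    return list(boxes), descriptors

def associate(desc_new, desc_tracked, max_dist=50.0):
    """Greedy data association: match each new detection to the nearest
    tracked appearance vector if it lies within a distance threshold."""
    matches = []
    for i, d in enumerate(desc_new):
        if not desc_tracked:
            break
        dists = [np.linalg.norm(d - t) for t in desc_tracked]
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches
```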
-
-
-
Conceptual reasoning for consistency insurance in logical deduction and application for critical systems
More LessReasoning in propositional logic is a key element in software engineering; it is applied in different domains, e.g., specification validation, code checking, theorem proving, etc. Since reasoning is a basic component in the analysis and verification of different critical systems, significant efforts have been dedicated to improving its efficiency in terms of time complexity, correctness and generalization to new problems (e.g., the SAT problem, inference rules, inconsistency detection, etc.). We propose a new Conceptual Reasoning Method for an inference engine in which such improvements are achieved by combining the semantic interpretations of a logical formula with the mathematical background of formal concept analysis. In fact, each logical formula is mapped into a truth-table formal context and any logical deduction is obtained by Galois connection. More particularly, we combine all truth tables into a global one, which has the advantage of containing the complete knowledge of all deducible rules or, possibly, an eventual inconsistency in the whole system. A first version of the new reasoning system was implemented and applied to medical data. Efficiency in conflict resolution as well as in knowledge expressiveness and reasoning was shown. Serious challenges related to time complexity have been addressed, and further improvements are under investigation. "Acknowledgement: This publication was made possible by a grant from the Qatar National Research Fund NPRP04-1109-1-174. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the QNRF."
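A minimal sketch of deduction over a combined truth table is shown below: a formula is deducible when it holds in every model of the knowledge base, and an empty model set signals inconsistency. The toy rules are illustrative, and the formal concept analysis and Galois-connection machinery itself is not reproduced here.

```python
from itertools import product

def models(variables, clauses):
    """Enumerate the truth table and keep the rows (models) that satisfy
    every clause; each clause is a function of the assignment dict."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(clause(assignment) for clause in clauses):
            yield assignment

def entails(variables, clauses, conclusion):
    """A formula is deducible iff it is true in every model of the KB;
    an empty model set signals an inconsistent knowledge base."""
    rows = list(models(variables, clauses))
    if not rows:
        raise ValueError("inconsistent knowledge base")
    return all(conclusion(m) for m in rows)

# Toy medical-style rules: fever AND cough -> flu ; flu -> rest
vars_ = ["fever", "cough", "flu", "rest"]
kb = [
    lambda a: (not (a["fever"] and a["cough"])) or a["flu"],
    lambda a: (not a["flu"]) or a["rest"],
    lambda a: a["fever"],
    lambda a: a["cough"],
]
print(entails(vars_, kb, lambda a: a["rest"]))   # True: "rest" is deducible
```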
-
-
-
Alice in the Middle East (Alice ME)
By Saquib RazakWe will present an overview and in-progress report of a QNRF-sponsored project for research in the effects of using program visualization in teaching and learning computational thinking. Computers and computing have made possible incredible leaps of innovation and imagination in solving critical problems such as finding a cure for diseases, predicting a hurricane's path, or landing a spaceship on the moon. The modern economic conditions of countries around the world are increasingly related to their ability to adapt to the digital revolution. This, in turn, drives the need for educated individuals who can bring the power of computing-supported problem solving to an increasingly expanded field of career paths. It is no longer sufficient to wait until students are in college to introduce computational thinking. All of today's students will go on to live in a world that is heavily influenced by computing, and many of these students will work in fields that involve or are influenced by computing. They must begin to work with algorithmic problem solving and computational methods and tools in K-12. Many courses taught in K-12 (for example, math and science) teach problem solving and logical thinking skills. Computational thinking embodies these skills and brings them to bear on problems in other fields and on problems that lie at the intersection of these fields. In the same way that learning to read opens a gateway to learning a multitude of knowledge, learning to program opens a gateway to learning all things computational. Alice is a programming environment designed to enable novice programmers in creating 3-D virtual worlds, including animations and games. In Alice, 3-D models of objects (e.g., people, animals and vehicles) populate a virtual world and students use a drag and drop editor to manipulate the movement and activities of these objects. Alice for the Middle East - "Alice ME" is a research project funded by Qatar National Research Fund (QNRF) under their National Priorities Research Program (NPRP) that aims to modify the well-known Alice introductory programming software to be internationally and culturally relevant for the Middle East. In this project, we are working on developing new 3D models that provide animation objects encountered in daily lives (animals, buildings, clothing, vehicles, etc.) as well as artifacts that reflect and respect the heritage of Qatari culture. The new models will provide characters and images that enable students to create animations that tell the stories of local culture, thereby supporting the QNV 2030 aspiration for maintaining a balance between modernizations and preserving traditions. We are also working with local schools to develop an ICT curriculum centered on using Alice to help students learn computational thinking. We are also designing workshops to train teachers in using Alice and delivering this curriculum.
-
-
-
Awareness of the farmers about effective delivery of farm information by ICT mediated extension service in Bangladesh
More LessThe main focus of the study was to find out the level of awareness about the effectiveness of ICT-mediated extension services in disseminating farm information to farmers. The factors influencing the farmers' awareness and the problems faced by the farmers in getting farm information were also explored. Data were collected from a sample of 100 farmers out of 700. A structured interview schedule and a check list were used to collect data through face-to-face interviewing and focus group discussion (FGD) with the farmers during May to June 2013. Awareness was measured using a 4-point rating scale; appropriate weights were assigned to each of the responses and the awareness score was calculated by adding the weights. Thus, the awareness scores of the respondents ranged from 16 to 50 against the possible range of 0 to 64. The effectiveness of ICT was considered based on the amount of information being supplied, the acceptability of the information, the usage of the information and the outcome/benefit obtained by the farmers from using the information. About three-fourths (74 percent) of the farmers had moderate awareness, while almost one fourth (23 percent) had low and only 3 percent had high awareness about effective delivery of farm information by ICT centers. The level of education, farm size, family size, annual income, training exposure, organizational participation and extension media contact of the farmers were significantly correlated with their awareness about effective delivery of farm information. The stepwise multiple regression analysis showed that, out of 9 variables, four (organizational participation, annual income, farm size and family size of the farmers) combined accounted for 47.90 percent of the total variation regarding awareness of effective delivery of farm information. Inadequate services of field extension agents, frequent power disruption, lack of skilled manpower (extension agents) at ICT centers, lack of training facilities for the farmers, and poor supervision and monitoring of field extension activities by the superior officers were the major problems mentioned by the farmers regarding effective dissemination of farm information by the ICT-mediated extension service. Key words: Awareness, ICT mediated extension service, effective delivery
-
-
-
Translate or Transliterate? Modeling the Decision For English to Arabic Machine Translation
By Mahmoud AzabTranslation of named entities (NEs) is important for NLP applications such as Machine Translation (MT) and Cross-lingual Information Retrieval. For MT, named entities are a major subset of the out-of-vocabulary terms. Due to their diversity, they cannot always be found in parallel corpora, dictionaries or gazetteers. Thus, state-of-the-art MT systems need to handle NEs in specific ways: (i) direct translation, which results in missing many out-of-vocabulary terms, and (ii) blind transliteration of out-of-vocabulary terms, which does not necessarily contribute to translation adequacy and may actually create noisy contexts for the language model and the decoder. For example, in the sentence "Dudley North visits North London", the MT system is expected to transliterate "North" in the former case, and translate "North" in the latter. In this work, we present a classification-based framework that enables an MT system to automate the decision of translation vs. transliteration for different categories of NEs. We model the decision as a binary classification at the token level: each token within a named entity gets a decision label to be translated or transliterated. Training the classifier requires a set of NEs with token-level decision labels. For this purpose, we automatically construct a bilingual lexicon of NEs paired with the translation/transliteration decisions from two different domains: we heuristically extract and label parallel NEs from a large word-aligned news parallel corpus, and we use a lexicon of bilingual NEs collected from Arabic and English Wikipedia titles. Then, we designed a procedure to clean up the noisy Arabic NE spans by part-of-speech verification and by heuristically filtering impossible items (e.g. verbs). For training, the data is automatically annotated using a variant of edit distance measuring the similarity between an English word and its Arabic transliteration. For the test set, we manually reviewed the labels and fixed the incorrect ones. As part of our project, this bilingual corpus of named entities has been released to the research community. Using Support Vector Machines, we trained the classifier using a set of token-based, contextual and semantic features of the NEs. We evaluated our classifier both in the limited news and diverse Wikipedia domains, and achieved a promising accuracy of 89.1%. To study the utility of using our classifier in an English to Arabic statistical MT system, we deployed it as a pre-translation component of the MT system. We automatically located the NEs in the source language sentences and used the classifier to find those which should be transliterated. For such terms, we offer the transliterated form as an option to the decoder. Adding the classifier to the SMT pipeline resulted in a major reduction of out-of-vocabulary terms and a modest improvement of the BLEU score. This research is supported by the Qatar National Research Fund (a member of the Qatar Foundation) through grants NPRP-09-1140-1-177 and YSREP-1-018-1-004. The statements made herein are solely the responsibility of the authors.
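A minimal sketch of the token-level translate/transliterate classification is shown below using scikit-learn; the feature set and the tiny training examples are illustrative stand-ins for the token-based, contextual and semantic features and the automatically labeled corpus described in the abstract.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def token_features(tokens, i):
    """Toy token-level features (surface form, casing, simple context);
    the paper's actual feature set also includes semantic cues."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_title": tok.istitle(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
        "suffix3": tok[-3:].lower(),
    }

# Tiny illustrative training set: 1 = transliterate, 0 = translate
sents = [["Dudley", "North", "visits", "North", "London"],
         ["the", "United", "Nations", "meets", "in", "New", "York"]]
labels = [[1, 1, 0, 0, 1],
          [0, 0, 0, 0, 0, 1, 1]]

X = [token_features(s, i) for s in sents for i in range(len(s))]
y = [l for ls in labels for l in ls]

clf = make_pipeline(DictVectorizer(), LinearSVC())
clf.fit(X, y)
test = ["North", "Korea", "borders", "China"]
print(clf.predict([token_features(test, i) for i in range(len(test))]))
```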
-
-
-
Technology tools for enhancing English literacy skills
By Mary DiasThe goal of this work is to explore the role of technology tools in enhancing the teaching and learning processes for English as a foreign or second language. Literacy is a crucial skill that is often linked to quality of life. However, access to literacy is not universal. Therefore, the significance of this research is its potential impact on the global challenge of improving child and adult literacy rates. Today's globalized world often demands strong English literacy skills for success because the language of instruction and business is frequently English. Even in Qatar's efforts to create a knowledge economy, Education City was established with the majority of instruction in English. Moreover, NGOs such as Reach Out to Asia are partnering with Education City universities to teach English literacy to migrant laborers in Qatar. Many migrant workers reside and work in Qatar for many years and can often improve their job prospects if they speak and understand English. However, Qatar's English literacy problems are not limited to the migrant population. The latest published (2009) PISA (Program for International Assessment) results show that 15-year-olds in Qatar for the most part are at Level one out of six proficiency levels in literacy. Qatar placed among the lowest of the 65 countries that participated in this PISA Assessment. Several research groups have developed technology to enhance literacy skills and improve motivation for learning. Educational games are in increasing demand and are now incorporated into formal education programs. Since the effectiveness of technology on language learning is dependent on how it is used, literacy experts have identified the need for research about appropriate ways and contexts in which to apply technology. Our work shares some goals with the related work, but there are also significant differences. Most educational games and tools are informed by experts on teaching English skills, focused on the students, and act as fixed stand-alone tools that are used outside the school environment. In contrast, our work is designed to complement classroom activities and to allow for customization while remaining cost effective. As such, it seeks to engage parents, teachers, game developers and other experts to benefit and engage learners. Through this work, we engage with different learner populations ranging from children to adults. Expected outcomes of our work include the design, implementation, and testing of accessible and effective computing technology for enhancing English literacy skills among learners across the world. This suite of computer-based and mobile phone-based tools is designed to take into account user needs, local constraints, cultural factors, available resources, and existing infrastructure. We field-test and evaluate our literacy tools and games in several communities in Qatar and in the United States. Through this work, we are advancing the state-of-the-art in computer-assisted language learning and improving the understanding of educational techniques for improving literacy. Our presentation will provide an overview of the motivation for this work, an introduction to our user groups, a summary of the research outcomes of this work to date, and an outline of future work.
-
-
-
Exploiting space syntax for deployable mobile opportunistic networking
More LessThere are many cities where urbanization occurs at a faster rate than that of communication infrastructure deployment. Mobile users with sophisticated devices are often dissatisfied with this lag in infrastructure deployment; their Internet connection is via opportunistic open access points for short durations, or via weak, unreliable, and costly 3G connections. With increased demands on network infrastructure, we believe that opportunistic networking, where user mobility is exploited to increase capacity and augment Internet reachability, can play an active role as a complementary technology to improve user experience, particularly with delay-insensitive data. Opportunistic forwarding solutions were mainly designed using a set of assumptions that have grown in complexity, rendering them unusable outside their intended environment. Figure 1 categorizes sample state-of-the-art opportunistic forwarding solutions based on their assumption complexity. Most of these solutions, however, are not designed for large scale urban environments. In this work, we are, to the best of our knowledge, the first to exploit the space syntax paradigm to better guide forwarding decisions in large scale urban environments. Space syntax, initially proposed in the field of architecture to model natural mobility patterns by analyzing spatial configurations, offers a set of measurable metrics that quantify the effect of road maps and architectural configurations on natural movement. By interacting with the pre-built static environment, space syntax predicts natural movement patterns in a given area. Our goal is to leverage space syntax concepts to create efficient distributed opportunistic forwarding solutions for large scale urban environments. We address two communication themes: (1) Mobile-to-Infrastructure: We propose a set of space syntax based algorithms that adapt to a spectrum of simplistic assumptions in urban environments. As depicted in Figure 1, our goal is to gain performance improvement across the spectrum, within each assumption category, when compared to other state-of-the-art solutions. We adopt a data driven approach to evaluate the space syntax based forwarding algorithms we propose, within each of three assumption categories, based on large scale mobility traces capturing vehicle mobility. Overall, our results show that our space syntax based algorithms perform more efficiently within each assumption category. (2) Infrastructure-to-Mobile: We propose a new algorithm, Select&Spray, which leverages space syntax and enables data transfers to mobile destinations reached directly through the infrastructure or opportunistically via other nodes. This architecture consists of: (i) a select engine that identifies a subset of directly connected nodes with a higher probability of forwarding messages to destinations, and (ii) a spray engine residing on mobile devices that guides the opportunistic dissemination of messages towards destination devices. We evaluate our algorithm using several mobility traces. Our results show that Select&Spray is more efficient in guiding messages towards their destinations. It helps extend the reach of data dissemination to more than 20% of the interested destinations within very short delays, and successfully reaches almost 90% of the destinations in less than 5 minutes.
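The select step of Select&Spray can be sketched roughly as follows; the scoring formula, the weighting between space-syntax integration and geographic progress, and the sample integration values are assumptions made for illustration, not the authors' actual algorithm.

```python
import math

def score_relay(node, dest, integration, alpha=0.6):
    """Hypothetical relay score mixing (i) the space-syntax integration
    value of the road segment the node is on (well-integrated segments
    attract more natural movement) and (ii) geographic progress toward
    the destination. The weighting alpha is an illustrative assumption."""
    dist = math.dist(node["pos"], dest)
    progress = 1.0 / (1.0 + dist)
    return alpha * integration[node["segment"]] + (1 - alpha) * progress

def select_relays(neighbors, dest, integration, k=2):
    """Select engine sketch: keep the k best-scored directly connected
    nodes; the spray engine on each selected node would repeat the process."""
    ranked = sorted(neighbors, key=lambda n: score_relay(n, dest, integration),
                    reverse=True)
    return ranked[:k]

# Toy integration values per road segment and a few candidate neighbors.
integration = {"main_corniche": 0.9, "side_street": 0.3, "ring_road": 0.7}
neighbors = [
    {"id": "bus_12", "pos": (2.0, 1.0), "segment": "main_corniche"},
    {"id": "taxi_7", "pos": (0.5, 0.4), "segment": "side_street"},
    {"id": "car_33", "pos": (1.0, 3.0), "segment": "ring_road"},
]
print([n["id"] for n in select_relays(neighbors, dest=(0.0, 0.0),
                                      integration=integration)])
```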
-
-
-
Face Detection Using Minimum Distance with Sequence Procedure Approach
By Sayed HamdyIn recent years face recognition has received substantial attention from both the research community and the market, but it remains very challenging in real applications. A lot of face recognition algorithms, along with their modifications, have been developed during the past decades. A number of typical algorithms are presented, categorized into appearance-based and model-based schemes. In this paper we present a new method for face detection called the Minimum Distance Detection Approach (MDDA). The obtained results clearly confirm the efficiency of the developed model as compared to the other methods in terms of classification accuracy. It is also observed that the new method is a powerful feature selection tool which has identified a subset of the best discriminative features. Additionally, the proposed model has gained a great deal of efficiency in terms of CPU time owing to the parallel implementation. In this model we use a direct model for face detection without using unlabelled data. In this research we try to identify one sample from a group of unknown samples using a sequence of processes. The results show that this method is very effective when we use a large sample of unlabelled data to detect one sample.
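The abstract does not spell out the internals of MDDA, so the sketch below shows a generic minimum-distance (nearest-centroid) classifier as one plausible reading of the approach; the feature vectors and identities are toy values.

```python
import numpy as np

def fit_centroids(features, labels):
    """Compute one mean feature vector (centroid) per known identity."""
    classes = sorted(set(labels))
    return {c: np.mean([f for f, l in zip(features, labels) if l == c], axis=0)
            for c in classes}

def minimum_distance_classify(centroids, sample):
    """Assign the sample to the identity whose centroid is closest
    (Euclidean distance) and also return that distance, so a threshold
    could be used to reject unknown faces."""
    best = min(centroids, key=lambda c: np.linalg.norm(sample - centroids[c]))
    return best, float(np.linalg.norm(sample - centroids[best]))

# Toy 3-D feature vectors standing in for real face descriptors.
feats = [np.array([0.9, 0.1, 0.2]), np.array([0.8, 0.2, 0.1]),
         np.array([0.1, 0.9, 0.8]), np.array([0.2, 0.8, 0.9])]
labels = ["person_A", "person_A", "person_B", "person_B"]
centroids = fit_centroids(feats, labels)
print(minimum_distance_classify(centroids, np.array([0.85, 0.15, 0.15])))
```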
-
-
-
Analysis Of Energy Consumption Fairness In Video Sensor Networks
More LessThe use of more effective processing tools such as advanced video codecs in wireless sensor networks (WSNs) has enabled widespread adoption of video-based WSNs for monitoring and surveillance applications. Considering that in video-based WSN applications large amounts of energy are required for both compression and transmission of video content, optimizing the energy consumption is of paramount importance. There is a trade-off between encoding complexity and compression performance, in the sense that high compression efficiency comes at the expense of increased encoding complexity. On the other hand, there is a direct relationship between coding complexity and energy consumption. Since the nodes in a video sensor network (VSN) share the same wireless medium, there is also an issue with the fairness of bandwidth allocation to each node. Nevertheless, the fairness of resource allocation (encoding and transmission energy) for nodes placed at different locations in VSNs has a significant effect on energy consumption. In our study, the objective is to maximize the lifetime of the network by reducing the consumption of the node with the maximum energy usage. Our research focuses on VSNs with a linear topology, where the nth node relays its data through the (n-1)th node, and the node closest to the sink relays information from all the other nodes. In our approach, we analyze the relation between the fairness of the nodes' resource allocation, video quality and the VSN's energy consumption to propose an algorithm for adjusting the coding parameters and fairness ratio of each node such that energy consumption is balanced. Our results show that by allocating higher fairness ratios to the nodes closest to the sink, we reduce the maximum energy consumption and achieve a more balanced energy usage. For instance, in the case of a VSN with six nodes, by allocating fairness ratios between 0.17 and 0.3 to the nodes closer to the sink, the maximum energy consumption is reduced by 11.28%, with a standard deviation of the nodes' energy consumption (STDen) of 0.09 W compared to 0.25 W achieved by the maximum fairness scheme.
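The per-node energy bookkeeping for a six-node linear VSN can be sketched as below. The energy model, in which a smaller bit budget forces costlier encoding while transmission costs a fixed energy per bit per hop, is an assumption made for illustration; the constants are not the paper's measured values, although the qualitative outcome (a skewed allocation lowering the maximum and the spread) follows the reported trend.

```python
import numpy as np

# Bookkeeping sketch for a 6-node linear VSN in which node 0 sits next to
# the sink and relays all traffic from farther nodes. The energy model and
# all constants are illustrative assumptions, not the paper's values.
TOTAL_BITS = 1e9
E_TX = 5e-9                              # J per transmitted bit per hop (assumed)
ENC_REF, R_REF = 1.0, TOTAL_BITS / 6     # 1 J of encoding at the equal-share rate (assumed)

def node_energies(fairness):
    rates = np.array(fairness) * TOTAL_BITS
    energies = []
    for i, r in enumerate(rates):
        enc = ENC_REF * R_REF / r            # smaller budget -> harder, costlier encoding
        tx = rates[i:].sum() * E_TX          # own bits plus everything relayed
        energies.append(enc + tx)
    return np.array(energies)

equal  = [1 / 6] * 6                                  # maximum-fairness allocation
skewed = [0.30, 0.25, 0.17, 0.12, 0.09, 0.07]         # larger shares near the sink
for name, ratios in (("equal", equal), ("skewed", skewed)):
    e = node_energies(ratios)
    print(f"{name:6s} max = {e.max():.2f} J, std = {e.std():.2f} J")
```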
-
-
-
Quantifying The Cybersecurity Of Cloud Computing As A Mean Failure Cost
More LessQuantifying the Cybersecurity of Cloud Computing as a Mean Failure Cost
-
-
-
Exploration Of Optical Character Recognition Of Historical Arabic Documents
More LessOver 70,000 historical books exist in Qatar's heritage book collection, which forms an invaluable part of the Arabic heritage. The digitization of these books will help to improve accessibility to these texts while ensuring their preservation. The aim of this project is to explore Optical Character Recognition (OCR) techniques for digitizing historical Arabic texts. In this project, techniques for improving the OCR pipeline were explored in three stages. First, an exploration of page layout analysis was conducted. Next, new Arabic language translation models were built and the recognition rates were analyzed. Finally, an analysis of using various language models for OCR was conducted. An important initial step in the OCR pipeline is page layout analysis, which requires the identification and classification of regions of interest on a scanned page. In many historic Arabic texts scholars have written side notes on the page margins which add significant value to the main content. Thus, an exploration of techniques was conducted to improve the identification of side notes during the digitization of historic Arabic texts. First, an evaluation of text line segmentation was conducted using two notable open source OCR software packages: OCRopus and Tesseract. Next, geometric layout analysis techniques were explored using OCRopus and MATLAB to identify text line orientation. After the layout analysis, the next step in the OCR pipeline is the recognition of words and characters from the different text lines that are segmented in the page layout step. OCRopus was the main open source OCR software analyzed, which directly extracted the characters from the segmented lines. A number of character recognition models were created for extensive training of the OCRopus system. The historical Arabic text data was then tested on the trained OCRopus models to calculate character recognition rates. Additionally, another open source tool called IMPACT D-TR4.1 was tested to check the accuracy of clustering within the characters of the historical Arabic text. A later stage in OCR, after the recognition of characters, is word boundary identification. In written Arabic, spaces appear between individual words, and possibly within a word, which makes the word boundary identification problem difficult. This part of the project assumes character-level OCR and proceeds from there. For a given stream of characters, word boundaries are to be identified using perplexities of a language model (LM), at the character level and the word level. The character-level language model is explored in two ways: the first approach uses the segment program supported by the SRILM toolkit (Stolcke, 2002); the second approach maps the segmentation to an SMT problem and uses MOSES. The word-level language model is also explored in two ways: the first is a naive approach, where all possible prior word boundaries are explored per word, and the one with the highest probability is chosen; the second approach uses dynamic programming to find the overall boundary placement that minimizes cost, i.e. maximizes probability. This work is the result of a project done at QCRI's 2013 summer internship program.
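The word-level dynamic-programming approach to boundary placement can be sketched as below, with a toy unigram model standing in for the SRILM-scored language model; the vocabulary, log-probabilities and unknown-word penalty are illustrative.

```python
import math

# Toy unigram "language model": log-probabilities of known words. A real
# system would query an SRILM word n-gram model instead; unknown strings
# receive a heavy penalty.
LOGP = {"in": -2.0, "the": -1.5, "house": -3.0, "them": -3.5, "to": -1.8}
UNK = -12.0

def segment(chars, max_word_len=10):
    """Viterbi-style DP: best[i] is the best log-probability of any
    segmentation of chars[:i]; back[i] remembers where the last word starts."""
    n = len(chars)
    best = [0.0] + [-math.inf] * n
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(max(0, i - max_word_len), i):
            word = chars[j:i]
            score = best[j] + LOGP.get(word, UNK)
            if score > best[i]:
                best[i], back[i] = score, j
    # Recover the boundaries by walking the back-pointers.
    words, i = [], n
    while i > 0:
        words.append(chars[back[i]:i])
        i = back[i]
    return list(reversed(words))

print(segment("inthehouse"))   # -> ['in', 'the', 'house']
```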
-
-
-
Contextual Spellchecker To Improve Human-Robot Interaction
More LessThis work focuses on developing a contextual spellchecker to improve the correctness of input queries to a multi-lingual, cross-cultural robot receptionist system. Queries that have fewer misspellings will improve the robot's ability to answer them and in turn improve the effectiveness of the human-robot interaction. We focus on developing an n-gram model based contextual spellchecker to correct misspellings and increase the query-hit rate of the robot. Our test bed is a bi-lingual, cross-cultural robot receptionist, Hala, deployed in the reception area of Carnegie Mellon University in Qatar. Hala can accept typed input queries in Arabic and English and speak responses in both languages as she interacts with users. All input queries to Hala are logged. These logs allow the study of multi-lingual aspects, the influence of socio-cultural norms and the nature of human-robot interaction within a multicultural, yet primarily ethnic Arab, setting. A recent statistical analysis has shown that 26.3% of Hala's queries are missed. The missed queries are due either to Hala not having the required answer in the knowledge base or to misspellings; we have measured that 50% are due to misspellings. We designed, developed and assessed a custom spellchecker based on an n-gram model. We focused our efforts on a spellchecker for the English mode of Hala. We trained our system on our existing language corpus consisting of valid input queries, making the spellchecker more specific to Hala. Finally, we adjusted the n in the n-gram model and evaluated the correctness of the spellchecker in the context of Hala. Our system makes use of Hunspell, an engine that uses algorithms based on n-gram similarity, rule- and dictionary-based pronunciation data, and morphological analysis. Misspelled words are passed through the Hunspell spellchecker and the output is a list of possible words. Utilizing this list of words, we apply our n-gram model algorithm to find which word is best suited to a particular context. The model calculates the conditional probability P(w|s) of a word w given the previous sequence of words s, that is, predicting the next word based on the preceding n-1 words. To assess the effectiveness of our system, we evaluated it using 5 different cases of misspelled word location. A sentence is counted as correct when it is correctly spellchecked, and as incorrect when it does not change after passing through the spellchecker, when it includes transliterated Arabic, or when it is incorrectly spellchecked, resulting in a loss of semantics. We observed that context makes the spellchecking of a sentence more sensible, resulting in a higher hit rate in Hala's knowledge base. For case 5, despite having more context than the previous cases, the hit rate is lower. This is because other sources of error were introduced, such as users making use of SMS language or a mixture of English and Arabic. In future work we would like to tackle the above-mentioned problems and also work on a part-of-speech tagging system that would help in correcting real-word mistakes.
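The context-sensitive ranking step can be sketched as follows: candidate corrections (for example, Hunspell's suggestion list) are re-scored with a smoothed bigram model built from in-domain queries. The corpus, smoothing constant and candidate list below are toy stand-ins, not Hala's actual data.

```python
from collections import Counter

# Toy bigram model trained on a few in-domain queries; in the deployed
# system the counts would come from Hala's logged query corpus and the
# candidate list from the Hunspell suggestions.
corpus = ["where is the library", "where is the cafeteria",
          "who is the dean", "when is the holiday"]
bigrams, unigrams = Counter(), Counter()
for sent in corpus:
    toks = ["<s>"] + sent.split()
    unigrams.update(toks)
    bigrams.update(zip(toks, toks[1:]))

def bigram_prob(prev, word, alpha=0.1):
    """Add-alpha smoothed estimate of P(word | prev)."""
    vocab = len(unigrams)
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab)

def pick_candidate(prev_word, candidates):
    """Choose the candidate correction that best fits the left context."""
    return max(candidates, key=lambda c: bigram_prob(prev_word, c))

# "where is the libary": correct "libary" given the previous word "the".
print(pick_candidate("the", ["library", "liberty", "lobar"]))
```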
-
-
-
Speaker Recognition Using Multimodal Neural Networks And Wavelet Analysis
More LessSPEAKER RECOGNITION USING MULTIMODAL NEURAL NETWORKS AND WAVELET ANALYSIS
-
-
-
Building on strengths of indigenous knowledge to promote sustainable development in crisis conditions from community level: The case of Palestine
More LessAbstract This study focused on the use of traditional knowledge in promoting sustainable development in crisis conditions. It posed the question: How have successful community-level sustainable development efforts undertaken under crisis conditions drawn upon indigenous knowledge to achieve positive outcomes? The study is a cross-case analysis. The three cases addressed in this study explain some of the ways that indigenous knowledge has played significant positive roles in promoting sustainable development for communities living under crisis conditions in Palestine. Community-based patterns of indigenous knowledge indicated a significant focus on the strengths of local culture, social cohesion, the integration process, and special advantages for policy implementation from the community level as key components of sustainable development in crisis conditions. This study especially focuses on efforts to implement sustainable development in crisis conditions. As the World Commission on Environment and Development, better known as the Brundtland Commission, explained in its seminal report (1987), the core problem for sustainable development is the need to integrate social development, economic development, and environmental protection to ensure “development that meet the needs of the present without compromising the ability of the future generations to meet their own needs” (World Commission on Environment and Development 1987, p. 8). And as that report and subsequent studies indicate, too often the social development dimension of the living triangle has been ignored or dramatically undervalued as those involved in development have concentrated on economic development and, to some extent, environmental protection (World Commission on Environment and Development 1987). In addition to the important ongoing problem of a lack of focus on social aspects, sustainable development is particularly important and especially challenging in crisis conditions that include war, terrorism, and civil disorder and their aftermath. This research specifically considers the challenges of sustainable development in the Palestinian context with sensitivity to the need for the integration of all three elements of the living triangle and with concern for the special challenges presented by efforts to achieve sustainable development in crisis conditions. The study contributes to theory by analyzing common elements from the case studies and providing a set of testable propositions, grounded in those successful experiences, that can be a starting point for building theory. Practically, the study has generated lessons that sustainable development policy implementers and decision makers can learn from when addressing sustainable development in different crisis-condition contexts, such as the aftermath of what is called the “Arab Spring”.
-
-
-
Integrated methodological framework for assessing urban open spaces in Doha from inhabitants' reactions to structured evaluations
Authors: Ashraf M. Salama, Fatma Khalfani, Ahood Al-Maimani and Florian WiedmannThe current fast track urban growth is an important characteristic of the emerging city of Doha. However, very few studies have addressed several important growth aspects, including the examination of the way in which the inhabitants comprehend and react to their built environment and the resulting spatial experience. The availability of attractive open spaces is an essential feature of a liveable urban environment, for the inhabitants of cities and urban areas. Such importance is sometimes oversimplified when making decisions about land-use or discussing the qualities of the built form. As a city characterized by rapid development, urban open spaces in Doha are scattered around from its peripheries to its centre. Varying in form, function, and scale, some spaces are often located within enclave developments, or within larger urban interventions, while others represent portions of spaces with dense urban districts or open waterfronts. The objective of this paper is to investigate different parameters relevant to the qualities of the most important urban open spaces in the city. It adopts a multi-layered research methodology. First, a photo interview mechanism was implemented where 100 inhabitants reacted to imagery and the spatial qualities of twelve urban open spaces. Second, a walking tour assessment procedure was applied to assess the functional, perceptual and social aspects of these spaces. Results indicate correlations between inhabitants' reactions and assessment outcomes pertaining to positive and demerit qualities. Conclusions are developed to offer recommendations for improving existing spaces while envisioning responsive parameters for the design of future urban open spaces.
-
-
-
The oral historian: An infrastructure to support mobile ethnography
Authors: David Bainbridge, Annika Hinze and Sally Jo CunninghamQatar's rich Arabic heritage is not only captured in its buildings, artworks and stories, but also in the living memory of its people. Historians and ethnographers work to capture these stories by interviewing people in “oral history” projects, typically one by one. This is, naturally, a slow process, which can only record selected highlights of a people's rich memory. We here introduce a digital infrastructure that enables crowd-sourcing and distribution of Qatar's oral history. The goal is to inspire people to actively participate in shaping their country's historic record. People in Qatar as well as those living overseas will be stimulated to participate, thus weaving a rich heritage tapestry available electronically to Qataris and tourists alike. Each user will have our software installed as an app on their smartphone. While they are moving through their environs, users are prompted to create audio recordings using our app. As in an interview with an ethnographer, they are guided through a set of questions, one at a time. The person acting as the Oral Historian creates the dynamically configurable script of questions and further defines time limits for each answer. If a response is brief, the app prompts for more detail; if a response is overly lengthy, the app prompts for closure on that question. Our software automatically captures some basic metadata: the time, GPS location, and length of the recording. It can further prompt for semi-structured metadata from its user, such as descriptions of their surroundings and the period to which the recording refers. Users are also prompted to upload any photos, documents, or videos that might relate to their audio contribution. At the end of a recording, a user is asked if they wish to provide personal data, e.g., name and age. The system does not automatically use registration account details, as a user might be accompanied by another person who contributes to the oral history recording. These captured oral histories are grouped into collections and curated by an Oral Historian to prevent misuse and ensure quality. The curator may additionally decide to periodically publish a new set of questions to the registered users, for example to enrich particular topic areas in the collection. The end-user software is available as a smartphone app, whereas the interface for the historian is server-based for ease of use. The underlying infrastructure uses the open source digital library system Greenstone, which has a pedigree of two decades (www.greenstone.org). It is sponsored by UNESCO as part of its "Information for All" programme, and its user interface has been translated into over 50 languages. Recent developments enable mobile phone operation and multimedia content. Greenstone therefore provides an ideal platform to deliver this integrated, mobile infrastructure for crowd-sourcing oral history information. The captured oral histories and any accompanying multimedia artefacts are sent to a central library where Greenstone's content management tools are available to the curating Oral Historian. The items are thus integrated into an evolving, public collection that is available to mobile and web users alike.
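As a rough illustration of the metadata captured per answered question, a hypothetical upload payload might look like the sketch below; the field names and schema are assumptions for illustration, not the project's actual design.

```python
from dataclasses import dataclass, field, asdict
from typing import Optional, List
import json
import time

@dataclass
class OralHistoryRecording:
    """Hypothetical upload payload for one answered question; the field
    names are illustrative, not the project's actual schema."""
    question_id: str
    audio_file: str
    duration_sec: float
    latitude: float
    longitude: float
    captured_at: float = field(default_factory=time.time)
    surroundings: Optional[str] = None          # semi-structured metadata
    period_referred_to: Optional[str] = None
    attachments: List[str] = field(default_factory=list)   # photos, documents, videos
    contributor_name: Optional[str] = None      # only if the user opts in

rec = OralHistoryRecording(
    question_id="q07_souq_memories",
    audio_file="rec_0042.m4a",
    duration_sec=93.5,
    latitude=25.2867, longitude=51.5333,
    surroundings="Souq Waqif, early evening",
    attachments=["photo_0042.jpg"],
)
print(json.dumps(asdict(rec), indent=2))
```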
-
-
-
University roots and branches between ‘glocalization’ and ‘mondialisation’: Qatar's (inter)national universities
More LessAs in many parts of the world, the tertiary education sector in Qatar is growing rapidly, viewed as key to national development on the path to the “knowledge society”. The states of the Islamic world, with a significant but long-obscured past of scientific achievement, are witnessing a contemporary renaissance. The establishment of international offshore, satellite or branch campuses in the Arabian Gulf region emphasizes the dynamism of higher education development there: more than a third of the estimated hundred such university campuses worldwide exist there. Within the context of extraordinary expansion of higher education and science in this region, Qatar presents a valuable case of university development to test the diffusion of an emerging global model not only in quantitative, but also in qualitative terms. With an abbreviated history of several decades, Qatar's higher education and science policies join two contrasting strategies. These contrasting strategies are prevalent in capacity building attempts worldwide: (1) to match the strongest global exemplars through massive infrastructure investment and direct importation of existing organizational ability, faculty, and prestige, and (2) to cultivate native human capital through development of local competence. Thus, university-related and science policy making on the peninsula has been designed to directly connect with global developments while building local capacity in higher education and scientific productivity. Ultimately, the goal is to establish an “indigenous knowledge economy”. To what extent has Qatar's two-pronged strategy succeeded in building such bridges? Does the combination of IBCs and a national institution represent a successful and sustainable path for the future of higher education and science in Qatar — and for its neighbors?
-
-
-
The importance of developing the intercultural and pragmatic competence of learners of colloquial Arabic
More LessLanguage and culture are bound together. In Arabic, courtesy expressions play an important role. Consequently, a learner should be aware of them in order to fully master the Arabic language. The current research studies the compliment responses in Colloquial Arabic and their use when teaching Arabic as a foreign language. The speech act of complimenting was chosen because of the important role it plays in human communication. Compliments strengthen solidarity between the speakers and are an explicit reflection of cultural values. The first part of the study was a comparative ethnographic study on compliment responses in peninsular Spanish and in Lebanese Arabic. 72 selected members of a Lebanese and a Spanish social network participated in the research. The independent variables were: origin, age and gender. In both social networks, parallel communicative situations were created. The participants were linked by kinship or friendship and paid a compliment on the same topic. Secret recordings were used in order to register these communicative interactions and create a corpus formed with natural conversations. Compliment response sequences were analysed following a taxonomy created by the researcher for the specific study of the Spanish and Lebanese corpora. In the Lebanese corpus, formulaic expressions and invocations against the 'evil eye' were used. In Arabic and Islamic societies, it is believed that a compliment could attract the 'evil eye' if it is not accompanied by expressions invoking God's protection. In the Spanish corpus, both long and detailed explanations were frequently used. In the second part of the research, a corpus of courtesy expressions in colloquial Arabic is being built. The corpus of the current research could serve for future studies in the field of Arabic dialectology and sociolinguistics as it is the first one to include all the different Arabic dialects. The researcher will study the relationship between language and culture in Arabic societies. Participants of the second part of research are female Arab University students with an advance proficiency level of English and French. The independent variable is origin. Muted videos are the instrument to collect the data for this study. Three different videos for compliments about physical appearance, belongings and skills were recorded in Beirut and Bahrain. Students are requested to recreate the dialogue between the characters in Colloquial Arabic. The compliment response sequence is collected through this instrument because it enhances the students' creative freedom. The objectives of the study are: - Building a corpus of courtesy expressions in colloquial Arabic and conducting a comparative analysis of the formulaic responses to compliments in all Arabic dialects. - Studying if courtesy expressions are included in textbooks for teaching Arabic as a foreign language and if they are currently taught in Institutions and Universities. The results of the present research have some pedagogical implications. Courtesy expressions in spoken Arabic are essential and therefore should be introduced in the language classroom through real language examples. Developing the pragmatic competence plays an important role in teaching foreign languages and it helps Arabic learners to become intercultural speakers.
-
-
-
Cutting-edge research and technological innovations: The Qatar National Historic Environment Record demonstrates excellence in cultural heritage management
Authors: Richard Cuttler, Tobias Tonner, Faisal Abdulla Al Naimi, Lucie Dingwall and Liam DelaneySince 2008, the Qatar Museums Authority (QMA) and the University of Birmingham have collaborated on a cutting-edge research programme called the Qatar National Historic Environment Record (QNHER). This has made a significant contribution to our understanding of Qatar's diverse cultural heritage resource. Commencing with the analysis of terrestrial and marine remotely sensed data, the project expanded to undertake detailed terrestrial and marine survey across large parts of the country, recording archaeological sites and palaeoenvironmental remains ranging from the Palaeolithic to modern times. The project was not simply concerned with the collection of heritage data, but how that data is then stored and accessed. After consultation with Qatar's Centre for GIS, the project team designed and developed a custom geospatial web application which integrated a large variety of heritage-related information, including locations, detailed categorisations, descriptions, photographs and survey reports. The system architecture is based around a set of REST and OGC compliant web services which can be consumed by various applications. Day-to-day access for all stakeholders is provided via the QNHER Web App client, a fully bilingual Arabic and English HTML5 web application. The system accesses internet resources such as base mapping provided by Google Maps and Bing Maps and has become an invaluable resource for cultural heritage research, management and mitigation and currently holds over 6,000 cultural heritage records. Future development will see modules for survey, underwater cultural heritage, translation and web access for educational institutions. The QNHER geospatial web application has become pivotal in providing evidence-based development control advice for the QMA, in the face of rapid urbanisation, highlighting the importance of research, protection and conservation for Qatar's cultural heritage. However, this application has a much wider potential than simply heritage management within Qatar. Many other countries around the globe lack this kind of geospatial database that would enable them to manage their heritage. Clearly the diversity of cultural heritage, site types and chronologies means that simply attempting to transplant a system directly is inappropriate. However, with the input of regional heritage managers, particularly with regard to language and thesauri, the system could be customised to address the needs of cultural resource managers around the world. Most antiquities departments around the globe do not have country-wide, georeferenced base mapping or access to geospatial inventories. Access to internet resources has major cost-saving benefits, while providing improved mapping and data visualisation. More importantly this offers the opportunity for cultural heritage management tools to be established with minimal outlay and training. The broad approach the project has taken and the technological and methodological innovations it introduced make the QNHER a leader in this field - not only in the Gulf, but also in the wider world.
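As an illustration of how a client might consume an OGC-compliant service of the kind described, the sketch below issues a WFS GetFeature request; the endpoint URL, layer name and property names are hypothetical placeholders, since the actual QNHER service details are not given in the abstract.

```python
import requests

# Hypothetical OGC WFS GetFeature request; the base URL and layer name are
# placeholders, not the actual QNHER service, which is not documented here.
BASE_URL = "https://example.org/qnher/ows"        # hypothetical endpoint
params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "heritage:sites",                # hypothetical layer name
    "outputFormat": "application/json",
    "count": 10,
    "bbox": "25.0,50.7,26.2,51.7,EPSG:4326",      # rough Qatar extent
}
resp = requests.get(BASE_URL, params=params, timeout=30)
resp.raise_for_status()
for feature in resp.json().get("features", []):
    props = feature.get("properties", {})
    print(props.get("name"), "-", props.get("site_type"))   # hypothetical fields
```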
-
-
-
Veiling in the courtroom: Muslim women's credibility and perceptions
By Nawal AmmarThis presentation provides systematic evidence on an emergent debate about a Muslim woman's dress and her perceived credibility as a witness in a court context. The objective of this research is to understand how Muslim women's dress impacts their perceived credibility. The issue of testifying in western courts while wearing either a head veil (hijab) or a face veil (niqab) has been strongly contested on the grounds that physically seeing the witness's face helps observers judge her credibility (e.g., R. v. N.S., 2010). Canada's Supreme Court (N.S. v. Her Majesty the Queen, et al., 2012) ruled that judges will disallow the niqab "whenever the effect of wearing it would impede an evaluation of the witness's credibility." In 2010, an Australian judge ruled that a Muslim woman must remove her full veil while giving evidence before a jury. In 2007, the UK guidance on victims also indicated that the niqab may affect the quality of evidence given in the court room. All of those decisions and opinions run contrary to systematic research. Psychological studies suggest that nonverbal cues are not only poor indicators of veracity, but that they are the least useful indicators of deception (e.g., DePaulo et al., 2003, Vrij, 2008). This presentation discusses the results of a research project that examined Muslim women's credibility. Using a quasi-experimental design, three groups of Muslim women lied or told the truth while testifying about a mock crime: 1) women without any coverings (safirat), 2) women wearing hijab (muhjabat) and 3) women wearing niqab (munaqabat). Videos of these women were then shown to audiences who assessed the credibility of the witnesses. The research further explored the witnesses' general perceptions of being an eyewitness within the court system, and to what extent (if any) their dress impacted their perceptions. The presentation fits within the Social Sciences, Arts and Humanities thematic pillar of Qatar's National Research Strategy. It more particularly fits within two of the grand challenges: Managing Transition to a Diversified Knowledge-based Society: Build a knowledge-based society by emphasizing a robust research culture, and Holistic and Systematic Assessment of the Rapidly Changing Environment: Foster motivation, scholarship, and prosperity among Qatari nationals and expatriates along with cultural accommodations that are in-sync with modern practice.
-
-
-
Culture embodied: An anthropological investigation of pregnancy (and loss) in Qatar
Authors: Susie Kilshaw, Kristina Sole, Halima Al Tamimi and Faten El TaherThis paper explores the emergent themes from the first stage of our cross-cultural research (UK and Qatar) into pregnancy and pregnancy loss. This paper presents a culturally grounded representation of pregnancy and the experience of pregnant women in Qatar. In order to understand the experience of miscarriage in Qatar, it is necessary to first develop an ethnotheory of pregnancy. This research uses the approach and methods of medical anthropology. However, this project is particularly exciting because of its commitment to interdisciplinary research: the research is informed and led by our collaboration between anthropologists and medical doctors. Ethnographic methods provide an in-depth understanding of the experience of pregnancy and pregnancy loss. Our main method is semi-structured interviews, but true to our anthropological foundation, we are combining this with other forms of data collection. We are observing clinical encounters (doctors' appointments, sonography sessions) and conducting participant observation, such as accompanying women when they shop in preparation for the arrival of their baby. The research is longitudinal and incorporates 12 months of ethnographic fieldwork in Qatar. Part of this is following 20 pregnant women throughout their pregnancy to better understand their developing pregnancy, their experience of pregnancy, the medical management of the pregnant body, and the development of fetal personhood. Women are interviewed on several occasions but we are also in contact for more informal knowledge sharing. Women who have recently miscarried (40 in Qatar and 40 in the UK) will also be interviewed. However, this paper will focus on our first cohort: pregnant Qatari women. After 6 months of fieldwork in Qatar, we have discovered a number of emergent themes which help us to better understand the social construction of pregnancy in Qatar. This will then allow us to better understand what happens when a pregnancy is unsuccessful. Here we develop a culturally specific representation of pregnancy in Qatar, including the importance of the fetal environment on the developing fetus and cultural theories of risk (evil eye, food avoidance). Issues around risk and blame are explored, as these will likely be activated when a pregnancy is unsuccessful. We also look at the experience of pregnancy and how this is impacted by past experiences of pregnancy loss (both stillbirth and miscarriage). The importance of motherhood in Qatar is considered, as it is a central concern for our participants. By exploring these themes we are developing a better understanding of the experience of pregnancy in Qatar, which will enable us to shed light on the impact of pregnancy loss on the mother and those around her.
-
-
-
Geolocated video in Qatar: A media demonstration research project
Authors: John Pavlik and Robert E. Vance
Geolocated video represents an opportunity for innovation in journalism and media. Reported here are the results of a proof-of-concept research project demonstrating and assessing the process of creating geolocated video in journalism and media. Geolocation refers to tagging video or other media content with geographic location information, usually obtained from GPS data. Geolocation is a growing feature of news and media content: it is used increasingly in photographs and social media, including Twitter posts. Geolocation in video is a relatively new application. Employing a proof-of-concept method, this project demonstrates how geolocated video (using Kinomap technology) serves several purposes in news and media (see Figure 1). First, geolocation allows the content to be automatically uploaded to Google Earth or other mapping software available online. This enables others anywhere to access that content by location. It is also an aspect of Big Data, in that it permits mapping or other analysis of geolocated content; such analysis can reveal a variety of insights about the production of media content. Second, geolocation, in concert with other digital watermarking, provides a useful tool to authenticate video. Geolocation in a digital watermark is a valuable tool to help establish the veracity of video or other content. It can help document when and where video produced by users, freelancers, or even professionally employed reporters covering a sensitive story was captured. Reporters (or lay citizens) providing smartphone video of an event can use geolocation to help establish its time, date and location. Third, geolocation supports freelance media practitioners in protecting copyright or intellectual property rights by helping provide a strong digital watermark that includes their identity and the precise time, date and location the video was captured. Fourth, geolocated video represents an opportunity for a new approach to storytelling, including storytelling in digital maps. Geolocated videos have been produced and made available on Google Earth, and viewing can occur immersively with mobile or wearable devices (e.g., augmented reality, Google Glass). This project aligns with three of the Qatar Research Grand Challenges. First, it supports Culture, Arts, Heritage, Media and Language within the Arabic Context, providing a medium to foster investment in the nation's legacy in Arabic arts, design, architecture, and cultural programs. Second, it supports Holistic and Systematic Assessment of the Rapidly Changing Environment, exploring the roles of communication (e.g., education, journalism, traditional and social media channels) in fostering awareness of social issues. Third, it supports Sustainable Urbanization (Doha as a smart city), demonstrating a state-of-the-art communications technology especially effective and efficient in addressing location-based communication challenges and needs.
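As a concrete illustration of the tagging described above, the short Python sketch below writes a minimal KML placemark that ties a video clip to the GPS coordinates and timestamp at which it was captured, so that mapping software such as Google Earth can display the clip by location. This is a hypothetical example, not part of the project's Kinomap workflow; the clip name, coordinates, timestamp, and URL are invented for illustration.

    # Minimal sketch: wrap a video clip's capture location in a KML placemark
    # so mapping tools such as Google Earth can display it by location.
    # All values below are illustrative only, not data from the project.
    from xml.sax.saxutils import escape

    def video_placemark(name, lat, lon, timestamp, video_url):
        """Return a KML document string with one placemark pointing at a video."""
        return f"""<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <Placemark>
        <name>{escape(name)}</name>
        <TimeStamp><when>{timestamp}</when></TimeStamp>
        <description><![CDATA[<a href="{video_url}">Watch clip</a>]]></description>
        <Point><coordinates>{lon},{lat},0</coordinates></Point>
      </Placemark>
    </kml>"""

    if __name__ == "__main__":
        kml = video_placemark(
            name="Street interview, West Bay",   # illustrative clip name
            lat=25.3208, lon=51.5310,            # approximate Doha coordinates
            timestamp="2013-11-24T10:15:00Z",
            video_url="https://example.org/clip.mp4",
        )
        with open("clip.kml", "w", encoding="utf-8") as f:
            f.write(kml)

Opening the resulting clip.kml file in Google Earth places a clickable marker at the recorded location, which is the kind of location-based access to content the project describes.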
-
-
-
Society and Daily Life Practices in Qatar Before the Oil Industry: A Historical Study in Light of Texts and Archaeological Evidence
A series of archaeological sites in Qatar attest to multifaceted aspects of society and daily life practices on the eve of the oil industry, particularly in the period from the 17th to the mid-20th century. The archaeological walled town of Al-Zubarah, for instance, has been excavated, first in the early 1980s by a Qatari mission, and since 2010 by the University of Copenhagen in partnership with Qatar Museum Authority. Due to its outstanding cultural importance to the common heritage of humanity, the town of Al-Zubarah (from ca. the 17th century to the mid-20th century) has recently been inscribed on the UNESCO World Heritage List. The geostrategic location of the town on the north-western coast, alongside its environmental landscape and physical remains such as the sea port, the fortified canal leading to the former Murayr, and the rich archaeological discoveries, attest to the town's role as a major pearl and trade center in the Gulf region. In addition, the uncovered quarters, palaces, courtyard houses and huts, alongside the town mosque, market and other domestic architecture, are essential components of a major Islamic trade center, planned and built according to Islamic law (Shari'a) and local social traditions. Beyond the uncovered architecture, the revealed material culture, particularly the large assemblages of vessels and tools made of different materials for different purposes and originating from various regions, is considered a primary physical source for reconstructing multifaceted aspects of social history, societal inter-relations, daily life practices and contact with neighboring and distant cultures. In light of texts, the archaeological record and the field observations of the author, particularly during his excavation at Al-Zubarah, this paper endeavors to reconstruct Qatari society and daily life practices in the period from the 17th through the mid-20th century, focusing on the following points:
- Society and gender-related immediate needs in light of the uncovered architecture and in the context of Islamic law (Shari'a) and local traditions.
- Communal identity and daily life practices in light of the uncovered material culture, such as tools and vessels.
- Evidence of contact with the surrounding regions and cultures.
-
-
-
Different trajectories to undergraduate literacy development: Student experiences and texts
Authors: Silvia Pessoa, Ryan Miller and Natalia Gatti
This presentation draws on data from a four-year longitudinal study of undergraduate literacy development at an English-medium university in Qatar. While previous studies have documented literacy development at the primary and secondary school levels (Christie & Derewianka, 2010; Coffin, 2006) and much has been researched about the nature of writing genres at the graduate and professional levels (Hyland, 2009; Swales, 1990, 2004), there is a limited body of research on writing at the undergraduate level (Ravelli & Ellis, 2005). The limited work at this level has been either largely qualitative (Leki, 2007; Sternglass, 1997) or primarily text-based (Byrnes, 2010; Colombi, 2002; Nessi, 2009, 2011; North, 2005; Woodward-Kron, 2002, 2005, 2008). Recently, drawing on systemic functional linguistics (SFL) and genre pedagogy, the work of Dreyfus (2013) and Humphrey and Hao (2013) has begun to shed light on the nature of disciplinary writing and writing development at the undergraduate level. However, there is much to learn about the nature of undergraduate writing. This study aims to contribute to this area by examining faculty expectations, student trajectories, and development from a text-based and ethnographic approach. This presentation reports on different trajectories of academic literacy development by presenting four case studies of multilingual students at an English-medium university in the Middle East. While several studies have used case studies to examine the developmental literacy trajectories of undergraduate students (Leki, 2007; Sternglass, 1997; Zamel, 2004), they have not closely and systematically examined writing development from a text-based approach. This paper aims to contribute to the growing interest in understanding the nature of undergraduate writing, especially among multilingual students, by using detailed case studies and longitudinal analysis of student writing. The presenters will describe the college experiences of four students and present longitudinal analysis of their writing over four years in the disciplines of business administration and information systems. The findings suggest that students enter the university with differing pre-college experiences that shape their college experiences and affect their rate of development. While personal, social, and academic development is documented in all cases, there are differences between those who came in with a strong academic background in English and those who had limited experience with English academic reading and writing. Using the tools of Systemic Functional Linguistics (Halliday, 1984), the text analysis of student writing shows development as their writing progressively becomes more academic (with increasing use of nominalizations and abstractions), more analytical (with increasing use of evaluations), and better organized, with differences among the four case studies. Overall, the findings suggest that while weaker students do improve their literacy skills while in college, many still graduate with inconsistencies and infelicities in their writing. Documenting the literacy development of university students in Qatar is pivotal as Qatar continues to invest in English-medium education to build its human capital. This project aims to generate insights for curricular planning and assessment in Qatar and a basis for research on academic literacy development that will be of interest to scholars internationally.
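To make the notion of "increasing use of nominalizations" more tangible, the Python sketch below offers a crude, purely illustrative proxy: counting words that carry common nominalizing suffixes per 100 words. This is not the authors' SFL analysis, which relies on trained human coding of student texts; the suffix list and the two sample sentences are assumptions made only for illustration.

    # Crude, illustrative proxy (not the authors' SFL analysis): count candidate
    # nominalizations by suffix to get a rough sense of how "academic" a text is.
    # Real SFL analysis is done by trained analysts; this only gestures at the
    # kind of surface feature the abstract refers to.
    import re

    NOMINAL_SUFFIXES = ("tion", "sion", "ment", "ness", "ity", "ance", "ence")

    def nominalization_density(text):
        """Return candidate nominalizations per 100 words (rough heuristic)."""
        words = re.findall(r"[A-Za-z]+", text.lower())
        if not words:
            return 0.0
        hits = sum(1 for w in words if w.endswith(NOMINAL_SUFFIXES) and len(w) > 6)
        return 100.0 * hits / len(words)

    year1 = "The company made money because people bought more things."
    year4 = ("Profitability improved through the expansion of consumption "
             "and the implementation of a diversification strategy.")
    print(f"year 1 sample: {nominalization_density(year1):.1f} per 100 words")
    print(f"year 4 sample: {nominalization_density(year4):.1f} per 100 words")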
-
-
-
Life satisfaction among female doctors vs. other female workers in Gaza
Authors: Sulaiman Abuhaiba, Khamis Elessi, Samah Afana, Islam Elsenwar and Arwa Abudan
Background: Fifty percent of those who had ever used an on-line life satisfaction measurement tool were considered optimally satisfied with their lives. Categorization by age, sex, country of origin and religion did not seem to affect the results of the on-line database of life satisfaction scores. In Gaza, Palestine, most of the public believes that being a female doctor kills any form of life enjoyment, and it is a common belief here that female doctors wait longer than other female workers before entering a stable marital relationship. The aims of our study were to quantify life satisfaction among female doctors in Gaza, compare their results with those of women from other work sectors, and show that a medical career does not adversely affect life satisfaction for Gazan female doctors. Methods: We used random sample tables to choose the workplaces for our sample groups and interviewed female workers at each selected facility using a convenience sampling technique. Fifty female doctors and 50 other female workers were compared using a standardized, objective life satisfaction measurement tool composed of 14 specific questions, giving a possible total score of 14 to 70, where 70 is the most satisfied score and a total score above 50 was the cut-off for defining satisfaction. Total average scores and average scores for each question were compared between the two groups using statistical analysis, and the frequency of use of over-the-counter medications was also compared. Results: Average age for female doctors (FD) and other workers (OW) was comparable (FD 30.16 years, OW 30.4 years). The response rate was 90% in both groups. Average age, number of children per family and matched scores for the 14 questions showed no statistically significant difference between married female doctors and married other workers (p = 0.4, 0.7 and 0.6, respectively). Life satisfaction among married female doctors and married other workers was not statistically significantly different (FD 13/25 vs. OW 9/25; p = 0.4). Average age, matched average scores for each of the 14 questions and life satisfaction proportions were not statistically significantly different between single females of the two groups (p = 0.2, 0.4 and 1.0, respectively). Use of over-the-counter drugs was reported significantly more often among single female doctors (p = 0.02). Interpretation: We found no real association between being a female doctor in Gaza and having a low life satisfaction score, and we can assure our female doctors that they do not enjoy their lives less than other female workers do. The average age of single females did not differ between the two groups, which stands against the widespread belief in our society that female doctors tend to marry later than other workers. Finally, our single female doctors should be discouraged from the non-rational use of over-the-counter drugs.
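For readers interested in how the reported comparisons could be reproduced, the Python sketch below simulates the analysis described in the Methods: total scores on a 14-70 scale, a cut-off above 50 for "satisfied", and comparisons of mean scores and of satisfied proportions between the two groups. The data are simulated, and the specific tests shown (Welch's t-test and Fisher's exact test) are plausible stand-ins, since the abstract does not name the exact statistical methods used.

    # Hedged sketch of the kind of comparison the abstract describes:
    # totals range 14-70, a total above 50 counts as "satisfied", and the
    # two groups (female doctors vs. other workers) are compared.
    # The data are simulated; the authors' exact tests are not stated.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    doctors = rng.integers(30, 71, size=45)        # simulated totals on the 14-70 scale
    other_workers = rng.integers(30, 71, size=45)

    SATISFIED_CUTOFF = 50                          # total > 50 => "satisfied"

    def satisfied(scores):
        return int((scores > SATISFIED_CUTOFF).sum())

    # Compare mean total scores between the two groups (Welch's t-test).
    t_stat, p_means = stats.ttest_ind(doctors, other_workers, equal_var=False)

    # Compare the proportion of satisfied respondents in each group.
    table = [[satisfied(doctors), len(doctors) - satisfied(doctors)],
             [satisfied(other_workers), len(other_workers) - satisfied(other_workers)]]
    odds_ratio, p_props = stats.fisher_exact(table)

    print(f"mean-score comparison: p = {p_means:.2f}")
    print(f"satisfied-proportion comparison: p = {p_props:.2f}")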
-
-
-
Learning how to survey the Qatari population
Universal surveys, such as the World Values Survey, seek to promote generalizability across contexts. But what if two different cultures interpret and respond to a general question in two different ways (e.g., King et al. 2004)? Using this methodological conundrum as a starting point, over the past year I have led a research team of four faculty and twelve students, drawn from Northwestern University in Qatar, Qatar University, and Georgetown University in Qatar, in creating, translating, and analyzing a context-sensitive survey of the Qatari population, funded through a Qatar National Research Fund UREP (Undergraduate Research Experience Program) grant. The survey was conducted by Qatar University's Social and Economic Survey Research Institute (SESRI) from January 15 to February 3 with a total of 798 Qatari respondents, making it a professional and valuable addition to the literature. Further, we were able to insert many questions that had never previously been asked of this population, including ones spanning the relative importance of traditional versus modern symbolism, specific opinions on the national education reforms, personal versus state priorities, satisfaction with particular welfare benefits offered by the state, and measures of religiosity. Both the process of creating a contextualized survey for the Qatari population, including what could and could not be asked and how sensitive concepts were translated, and the fascinating results, which have opened up new avenues of research into the sociopolitical transformations of the Qatari people, are well worth presenting to the community for feedback. Even more importantly, the insights gained from how we contextualized the survey can be applied to improve the current state of social science survey research in Qatar. The explosion of survey research in Qatar, pioneered by the Qatar Statistics Authority and Qatar University's SESRI and recently joined by the multinational surveys of the World Values Survey, Harris Polling, Zogby's, and the Arab Barometer, demonstrates the need for questions that are contextually and culturally sensitive and ensure full understanding of the Qatari population. Presenting this work at the Qatar Foundation Annual Research Conference will provide valuable feedback and networking opportunities with like-minded professionals, researchers, and community members in my quest to continue this collaborative and important research effort. Citation: King, Gary, Christopher Murray, Joshua Salomon, and Ajay Tandon. 2004. “Enhancing the Validity and Cross-Cultural Comparability of Measurement in Survey Research.” American Political Science Review 98 (1): 191-207.
-
-
-
Abuse of volatile substances
Authors: Professor Layachi Anser (العياشي عنصر) and others
The main objective of this study is to determine the prevalence of abuse of volatile substances, or "inhalants," among adolescents in the State of Qatar; a set of sub-objectives branches out from this main objective. The study set out from a set of core questions, such as: What are the demographic and social characteristics of those who use volatile substances? What led them to use these substances? Where and when do they use them? How heavy is the use, and how long have adolescents been using? And how aware are adolescents of the harm these substances cause to their physical and psychological health? The study relied on the social survey method, one of the best-known and most widely used methods in descriptive research, probably because it is among the oldest approaches in social research and because of the abundant data and precise information it provides, drawn from people's actual circumstances. The study population consists of male and female students in the preparatory and secondary stages of the independent schools operating under the Supreme Education Council across the different regions of the country. The sample was a stratified random sample, as this is the most appropriate design for a study population that includes adolescents from two different school stages and of both sexes. A questionnaire was used as the data-collection tool: 1,223 questionnaires were administered at the start of the study, and after checking, 25 forms were excluded as invalid, leaving a final total of 1,198 forms, two-thirds of them from males. The study reached a number of important findings, for example: adolescents who had used volatile substances represented 15.94% of the total sample of 1,198 adolescents, 55% of them male and 45% female; Qataris made up two-thirds of the users and non-Qataris the remaining third. Regarding age, the highest rates of use were among adolescents aged 13-18 years, who accounted for 87% of all users. Secondary-stage adolescents represented about two-thirds of the 191 student users (63.4%), compared with roughly one-third from the preparatory stage (36.6%). The results also revealed that rates of use were higher in densely populated areas such as the municipalities of Doha and Al Rayyan, followed, at much lower rates, by less densely populated areas such as Umm Salal, Al Wakrah and Al Khor. The results showed that 50% of users had been using these substances for less than a year, 16.7% for one to two years, 12.5% for three to four years, and 14.6% for five years or more. Among the study's key findings is that industrial benzine (petrol) and its derivatives are the most widespread volatile substances among adolescent users, used by 53.4% of them, followed by nail polish at 12% and glue at 8.9%, while other substances were used less. As for the places of use, the results showed that the street comes first, followed by the home, then the school; places of use differ by sex, with females using mainly at home while the street ranks first for males. Keywords: substance abuse, volatile substances, inhalants, adolescents, health effects.
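The stratified random sampling described above (by school stage and sex) can be illustrated with the short Python sketch below. The sampling frame, stratum sizes, and proportional allocation are invented for illustration; only the overall target of 1,223 questionnaires is taken from the abstract.

    # Illustrative sketch of stratified random sampling by school stage and sex,
    # with proportional allocation across strata. The frame below is hypothetical;
    # it is not the study's actual enrolment data.
    import random

    random.seed(42)

    # Hypothetical sampling frame: (stratum, number of enrolled students)
    frame = {
        ("preparatory", "male"): 9000,
        ("preparatory", "female"): 8500,
        ("secondary", "male"): 7500,
        ("secondary", "female"): 7000,
    }

    TARGET_SAMPLE = 1223  # questionnaires administered in the study

    total = sum(frame.values())
    sample = {}
    for stratum, size in frame.items():
        n = round(TARGET_SAMPLE * size / total)          # proportional allocation
        sample[stratum] = random.sample(range(size), n)  # n distinct students

    for stratum, picks in sample.items():
        print(stratum, "->", len(picks), "students sampled")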
-
-
-
Training model to develop the Qatar workforce using emerging learning technologies
By Mohamed Ally
The Qatar National Vision aims at "transforming Qatar into an advanced country by 2030, capable of sustaining its own development and providing for a high standard of living for all of its people for generations to come". The grand challenge of Human Capacity Development aims to develop sustainable talent for Qatar's knowledge economy in order to meet the need for a high-quality workforce. For Qatar to achieve its 2030 National Vision and become an advanced country by 2030, it has to train its citizens to function in a globalized and competitive world. Important skills for Qataris in the 21st century include communication skills and the ability to use emerging technologies. This presentation proposes a training model for developing the Qatar workforce for the 21st century using emerging learning technologies. The training model is based on a mobile learning research project funded by the Qatar Foundation through the Qatar National Research Fund, a collaborative research project involving Qatar University, Qatar Petroleum, Qatar Wireless Innovation Centre, and Athabasca University, Canada. The project developed and implemented training lessons on Communication Skills for the oil and gas industry, using mobile technology to deliver the training. The workers were employed at Qatar Petroleum and completed the training as part of their professional development to improve their English communication skills. Results from the project showed that workers' performance improved after they completed the training, and they reported that using mobile technology to deliver the training provides flexibility for learning on the job. They suggested that the training should be more interactive and game-like. This is important since today's young workers are comfortable using mobile technologies and need to be motivated to learn with them. The proposed Qatar National Training Model (QNTM) (Figure 1) is based on this mobile learning research project. In the QNTM, the learner/trainee/worker is at the center of the learning, since the goal of training is to provide the knowledge and skills to improve workers' performance on the job. The design of the training must follow good learning design principles, including preparing the learner for the training, providing activities for learners to complete to improve their knowledge and skills, allowing learners to practice to improve their performance, certifying learners based on their performance, and providing opportunities for learners to transfer what they learn to the job environment. The delivery of the training should be flexible, using a blended approach that includes face-to-face, hands-on, e-learning, mobile learning, and online learning. A variety of learning strategies, such as practice with feedback, tutorials, simulations, games, and problem solving, can be used depending on the learning outcomes to be achieved. The proposed Qatar National Training Model will allow for learner-centered training, lifelong learning, just-in-time learning, learning in context, developing skills required for 21st-century learning, and interaction between learners and between learners and the trainer using social media.
-
-
-
Building independent schools' capacity in Qatar through the School Based Support Program (SBSP): Perceptions of participating schools' teachers and principals
Over the past years, significant and rapid changes in many aspects of society and the world have led countries such as Qatar and others in the Gulf region to reform their national education systems, focusing on the integration of standards, assessment, and accountability. One of the key elements in most of these reforms is professional development, a central feature of such educational improvement initiatives because of the many contributions it can make. It is reasonably assumed that improving teachers' knowledge, skills, and dispositions is one of the most critical steps to improving student achievement (King & Newmann, 2001). Further, professional development plays a key role in addressing the gap between educators' preparation and standards-based reform. However, proposals from many quarters argue that professional development itself needs to be reformed (King & Newmann, 2001): much of the professional development offered to teachers and principals simply does not meet the challenges of the reform movement (Birman, Desimone, Porter, & Garet, 2000). Professional development in Qatar is no exception. Professional development has always taken place in Qatar's independent schools; indeed, "teachers and principals noted a downside to the steep quantity of professional development opportunities: Teachers reported feeling overwhelmed and burned out" (Brewer et al., 2009, p. 50). However, the quality and effectiveness of professional development were highly variable. As evidence, the Supreme Education Council (SEC) in Qatar found that relying primarily on international organizations to deliver staff development has not built the capacity to prepare current and future educators for the reformed schools. Despite a substantial national investment in professional development initiatives, concerns remain about the quality of the educational staff and the subsequent impact on instruction (Brewer et al., 2009). Further, teachers and principals at independent schools in Qatar have raised important questions about the effectiveness of traditional professional development programs and their impact on performance: they have attended many professional development programs, yet significant professional development needs remain (Palmer et al., 2010-2011). Another concern is the difficulty for teachers and principals of carving out time during the work day to participate in professional development, given the increased workload that many Qatar independent school teachers report; most have to stay after regular working hours and into the evening to attend workshops, so many of their days become quite long (Brewer et al., 2009, p. 50). In response to these concerns and needs, the School Based Support Program (SBSP) was launched in September 2011 by the National Center for Educator Development (NCED) at Qatar University. Its aim is to conduct high-quality, practical, school-based professional learning activities derived from research-based best practices, in order to significantly improve the performance of the participating independent schools and the professional practices of their principals and teachers. The purpose of this study, therefore, was to measure the impact of the SBSP as perceived by participating schools' principals and teachers.
-