Qatar Foundation Annual Research Forum Volume 2013 Issue 1
- Conference date: 24-25 Nov 2013
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2013
- Published: 20 November 2013
Measurement Of Refractive Indices Of Ternary Mixtures Using Digital Interferometry And Multi-Wavelength Abbemat Refractometer
Abstract: Knowledge of the properties of liquid mixtures is significantly important in scientific experimentation and technological development. Thermal diffusion in mixtures plays a crucial role in both nature and technology. The temperature and concentration coefficients of refractive index, the so-called contrast factors, contribute to studies in various fields, including crude oil experiments (SCCO) and the distribution of crude oil components. The Abbemat refractometer and the Mach-Zehnder interferometer have been proven to be precise, highly accurate, and non-intrusive methods for measuring the refractive index of a transparent medium. Refractive indices for three ternary mixtures containing three hydrocarbon compositions and their pure components of tetrahydronaphthalene (THN), isobutylbenzene (IBB), and dodecane (C12), used mainly in gasoline, were experimentally measured using both the Mach-Zehnder interferometer and a multi-wavelength Abbemat refractometer. Temperature and concentration coefficients of refractive index, or contrast factors, as well as their individual correlations for calculating refractive indices, are presented in this research. The experimental measurements were correlated over a wide range of temperatures and wavelengths and a broad range of compositions. The experimental refractive indices were compared with those estimated by applying several mixing rules: the Lorentz-Lorenz, Gladstone-Dale, Arago-Biot, Eykman, Wiener, Newton, and Oster predictive equations. The experimental values are in substantial agreement with those obtained from the L-L, G-D, and A-B equations, but not with those obtained from the Oster and Newton equations. The temperature, concentration and wavelength dependence of refractive index in mixtures agrees with published data. A comparison with the available literature and mixing rules shows that the new correlations can predict the experimental data with deviations of less than 0.001.
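As a brief illustration of how such mixing rules are evaluated, the sketch below computes the mixture refractive index predicted by the Lorentz-Lorenz and Gladstone-Dale relations from pure-component indices and volume fractions; the numerical values are placeholders rather than the measured data of this study.

```python
import numpy as np

def lorentz_lorenz(n_pure, phi):
    """Lorentz-Lorenz rule: (n^2 - 1)/(n^2 + 2) = sum_i phi_i (n_i^2 - 1)/(n_i^2 + 2)."""
    n_pure, phi = np.asarray(n_pure, float), np.asarray(phi, float)
    rhs = np.sum(phi * (n_pure**2 - 1.0) / (n_pure**2 + 2.0))
    return np.sqrt((1.0 + 2.0 * rhs) / (1.0 - rhs))

def gladstone_dale(n_pure, phi):
    """Gladstone-Dale rule: n - 1 = sum_i phi_i (n_i - 1)."""
    n_pure, phi = np.asarray(n_pure, float), np.asarray(phi, float)
    return 1.0 + np.sum(phi * (n_pure - 1.0))

# Placeholder pure-component indices for THN, IBB and C12 at one wavelength and temperature.
n_pure = [1.517, 1.486, 1.421]
phi = [1/3, 1/3, 1/3]          # volume fractions, summing to 1
print(lorentz_lorenz(n_pure, phi), gladstone_dale(n_pure, phi))
```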
Top ten hurdles on the race towards the Internet of Things
By Aref Meddeb
While the Internet is continuously growing and unavoidably moving towards a ubiquitous internetworking technology, ranging from casual data exchange, home entertainment, security, military, healthcare, transportation, and business applications to the Internet of Things (IoT), where anything can communicate anywhere, anytime, there is an increasing and urgent need to define global standards for future Internet architectures as well as service definitions. In the future, the Internet will perhaps become the most exciting entertainment medium. It is very likely that we will be able to hold 3D conferences at home, using holographic projection, spatial sound and haptic transmission. Further, we may no longer use traditional devices such as laptop and desktop computers and cell phones. Monitors and speakers will be completely different, ranging from 3D glasses and headsets to 3D contact lenses and in-ear wireless earphones. The keyboard and mouse will disappear as well and will be replaced by biometrics such as voice, iris and fingerprint recognition, and movement detection. Many projects are being proposed to support and promote the Internet of Things worldwide, ranging from initiatives (http://www.iot-i.eu/public), alliances (http://www.ipso-alliance.org/), forums (http://www.iot-forum.eu/), and consortiums (http://iofthings.org/) to architectures (http://www.iot-a.eu/) and research clusters (http://www.internet-of-things-research.eu/), and the list goes on. There is seemingly a race towards leadership of the Internet of Things, to the extent that, often driven by "time-to-market" constraints and business requirements, most proposals made so far fail to consider all facets required to deliver the expected services, and often lead to ambiguous and even contradictory definitions of what the Internet of Things is meant to be. For example, some proposals assimilate IoT to Web 3.0, while others claim it is primarily based on RFID and similar systems. Between web semantics and cognitive radio communications, there is a huge gap that needs to be filled by adequate communication, security, and application software and hardware. Unfortunately, so far there is no general consensus on the most suitable underlying technologies and applications that will help the Internet of Things become a reality. In this presentation, we describe ten of the most challenging hurdles that IoT actors need to face in order to reach their goals. We identify the top ten issues as: 1) how to safely migrate from IPv4 to IPv6 given the proliferation of IPv4 devices; 2) regulation, pricing and neutrality of the future Internet; 3) OAM tools required for accounting and monitoring future Internet services; 4) future quality-of-service definitions and mechanisms; 5) reliability of future Internet services; 6) future security challenges; 7) scalability of the Internet of Things with its billions of interworked devices; 8) future access (local loop) technologies; 9) future transport (long-distance) telecommunication technologies; and 10) future end-user devices and applications. We aim to briefly explain and highlight the impact of each of these issues on service delivery in the context of the Internet of Things. We also intend to describe some existing and promising solutions available so far.
The HAIL platform for big health data
Authors: Syed Sibte Raza Abidi, Ali Daniyal, Ashraf Abusharekh and Samina Abidi
Big data analytics in health is an emerging area due to the urgent need to derive actionable intelligence from the large volumes of healthcare data in order to efficiently manage the healthcare system and to improve health outcomes. In this paper we present a ‘big’ healthcare data analytics platform, termed Healthcare Analytics for Intelligence and Learning (HAIL), that is an end-to-end healthcare data analytics solution to derive data-driven actionable intelligence and situational awareness to inform and transform health decision-making, systems management and policy development. The innovative aspects of HAIL are: (a) the integration of data-driven and knowledge-driven analytics approaches, (b) a sandbox environment for healthcare analysts to develop and test health policy/process models by exploiting a range of data preparation, analytical and visualization methods, (c) the incorporation of specialized healthcare data standards, terminologies and concept maps to support data analytics, and (d) text analytics to analyze unstructured healthcare data. The architecture of HAIL comprises the following four main modules (Fig. 1): (A) the Health Data Integration module, which entails a semantics-based metadata manager to synthesize health data originating from a range of healthcare institutions into a rich contextualized data resource; data integration is achieved through ETL workflows designed by health analysts and researchers. (B) The Health Analytics module provides a range of healthcare analytics capabilities, including (i) exploratory analytics using data mining to perform data clustering, classification and association tasks, (ii) predictive analytics to predict future trends/outcomes derived from past observations of healthcare processes, (iii) text analytics to analyze unstructured texts (such as clinical notes, discharge summaries, referral notes, clinical guidelines, etc.), (iv) simulation-based analytics to explore what-if questions based on simulation models, (v) workflow analytics to interact with modeled clinical workflows to understand the effects of various confounding factors, (vi) semantic analytics to infer contextualized relationships, anomalies and deviations through reasoning over a semantic health data model, and (vii) informational analytics to present summaries, aggregations, charts and reports. (C) The Data Visualization module offers a range of interactive data visualizations, such as geospatial visualizations, causal networks, 2D and 3D graphs, pattern clusters and interactive visualizations to explore high-dimensional data. (D) The Data Analytics Workbench is an interactive workspace that enables health data analysts to specify and set up their analytics process in terms of data preparation, selection and set-up of analytical methods, and selection of visualization methods. Using the workbench, analysts can design sophisticated data analytics workflows/models using a range of data integration, analytical and visualization methods. The HAIL platform is available via a web portal and a desktop application, and it is deployed on a cloud infrastructure. HAIL can be connected to existing health data sources to provide front-end data analytics. Fig. 2 shows the technical architecture. The HAIL platform has been applied to analyze real-life clinical healthcare situations using actual data from the provincial clinical data warehouse. We will present two case studies of the use of HAIL.
In conclusion, the HAIL platform addresses a critical need for healthcare analytics to impact health decision-making, systems management and policy development.
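A minimal sketch of the kind of analyst-defined workflow the workbench supports is given below: a data-preparation step, an analytics step and a reporting step are chained and executed in order. All class and function names here are hypothetical illustrations, not the actual HAIL API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class AnalyticsStep:
    name: str
    run: Callable[[Any], Any]       # each step transforms the data it receives

@dataclass
class Workflow:
    steps: List[AnalyticsStep] = field(default_factory=list)

    def execute(self, data):
        # Run the steps in the order the analyst arranged them.
        for step in self.steps:
            data = step.run(data)
        return data

# Hypothetical workflow: prepare -> cluster (exploratory analytics) -> summarize (informational analytics).
wf = Workflow([
    AnalyticsStep("prepare",   lambda rows: [r for r in rows if r is not None]),
    AnalyticsStep("cluster",   lambda rows: {"n_records": len(rows)}),
    AnalyticsStep("summarize", lambda result: f"report: {result}"),
])
print(wf.execute([1, None, 2, 3]))
```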
Utility of a comprehensive Personal Health Record (PHR) strategy
A fundamental component of improving health care quality is to include the patient as a participant in achieving their health goals and making decisions about their care. While many may see clinical information systems as a way to improve communication and track health information for clinicians, the role played by personal health records (PHRs) should not be overlooked in developing a modernized Integrated Health Management system. This presentation will outline the positives and negatives of sharing clinical data between providers and their patients, outline a few PHR functionalities that can support care coordination and health outcomes, and describe the components of a national strategy to support the development of PHRs. A PHR, also referred to as a patient portal, is a system that allows an individual to keep track of their health information. While access to information is important for patients, doctors may feel uncomfortable with the idea of sharing information with their patients in this manner. Alternatively, patients may worry about sharing information they would like to keep private. These concerns can often be controlled with the right policies to prevent unexpected use and sharing of the information, such as:
- providing data segmentation functionalities that allow patients to share only selected data with certain providers;
- sharing items such as medications and test results, but not sharing other information that providers might prefer to keep private, such as "notes".
Many PHRs draw information out of an existing clinical information system maintained by the physician. If a patient sees providers across multiple systems, the information can still be difficult to track. The most effective PHR allows data from any system to be included. To increase usefulness, a PHR may also store information that the patient adds themselves and provide tools such as medication reminders. Ideally, if a patient is seeing different doctors, those doctors will have information from everyone on the care team, but this does not always happen. If the patient has access to the information, they can share it with a range of providers, improving care coordination at the point of care, where it matters most. A PHR can be useful to all members of a population, and should be considered when planning a clinical information system implementation strategy. However, because different hospitals may provide different systems, it is helpful to have a government-wide strategy. For example, in the United States the government has developed a lightweight standard technical infrastructure based on secure messaging, which is being used to populate PHRs. The government also supports the development of governance to outline standard policies that ensure all stakeholders can trust the information in the system. A strategy to coordinate the use of PHRs is envisioned to have potential in a number of areas, from improvements in individual health to leveraging these systems to improve research on quality and health behaviors.
Service-based Internet Infrastructure for the Qatar World Cup 2022
The shortcomings of today's Internet and the high demand for complex and sophisticated applications and services drive a very interesting and novel research area called the Future Internet. Future Internet research focuses on developing a new network of similar magnitude to today's Internet but with more demanding and complex design goals and specifications. It strives to solve the issues identified in today's Internet by capitalizing on the advantages of emerging technologies in the area of computer networking such as software-defined networking (SDN), autonomic computing, and cloud computing. SDN represents an extraordinary opportunity to rethink computer networks, enabling the design and deployment of a future Internet. Utilizing these technologies leads to significant progress in the development of an enhanced, secure, reliable, and scalable Internet. In 2022, Qatar will host the World Cup, which more than 5 million people from all over the world are expected to attend. This event is expected to put the host country, Qatar, under massive pressure and pose huge challenges in terms of providing high-quality Internet service, especially with the increasing demand for emerging applications such as video streaming over mobile devices. It is vital to evaluate and deploy state-of-the-art technologies based on a promising future Internet infrastructure in order to provide the high-quality Internet services required by this event and by the continuing rapid growth of the country. The main goal of this paper is to present a new network design of similar magnitude to today's Internet but with more demanding and complex design goals and specifications defined by the knowledge and experience gathered from four decades of using the current Internet. This research focuses on the development of a complete system, designed with the new requirements of the Future Internet in mind, that aims to provide, monitor and enhance popular video streaming services in the face of increasing demand. The testing environment was built using the Global Environment for Network Innovations (GENI) testbed (see Figure 1). GENI, a National Science Foundation (NSF) funded research and development effort, aims to build a collaborative and exploratory network experimentation platform for the design, implementation and evaluation of future networks. Because GENI is meant to enable experimentation with large-scale distributed systems, experiments will in general contain multiple communicating software entities. Large experiments may well contain tens of thousands of such communicating entities, spread across continental or transcontinental distances. The conducted experiments illustrate how such a system can function under unstable and changing network conditions, dynamically learn its environment, recognize potential service degradation problems, and react to these challenges in an autonomic manner without the need for human intervention.
Advanced millimeter-wave concurrent 44/60GHz phased array for communications and sensing
Authors: Jaeyoung Lee, Cuong Huynh, Juseok Bae, Donghyun Lee and Cam Nguyen
Wireless communications and sensing have become an indispensable part of our daily lives, spanning communications, public service and safety, consumer, industrial, sports, gaming and entertainment, asset and inventory management, and banking applications, as well as government and military operations. As communications and sensing are poised to address challenging problems to make our lives even better in environments that can potentially disrupt them, such as highly populated urban areas, crowded surroundings, or moving platforms, considerable difficulties emerge that greatly complicate communications and sensing. Significantly improved communication and sensing technologies become absolutely essential to address these challenges. Phased arrays allow RF beams carrying the communication or sensing information to be rapidly steered or intercepted electronically from different angles across areas with particular amplitude profiles, enabling swift single- or multi-point communications or sensing over large areas or across many targets while avoiding potentially disrupting obstacles. They are particularly attractive for creating robust communication links for both line-of-sight (LOS) and non-line-of-sight (NLOS) operation due to their high directivity and scanning ability. In this talk, we will present a novel millimeter-wave dual-band 44/60 GHz phased-array front-end capable of two-dimensional scanning with orthogonal polarizations. This phased array particularly resolves the "RF signal leakage and isolation dilemma" encountered in existing phased-array systems. It integrates "electrically" the phased-array functions in two separate millimeter-wave bands into a single phased array operating concurrently in dual-band mode. These unique features, not achievable with existing millimeter-wave phased arrays, will push phased-array system performance to the next level while reducing size and cost, and will enhance the capability and applications of wireless communications and sensing, particularly when multifunction, multi-operation, multi-mission operation over complex environments with miniature systems becomes essential. This phased array enables vast communication and sensing applications, either communication or sensing or both simultaneously, for instance concurrent Earth-satellite/inter-satellite communications, high-data-rate WPANs and HDMI, and accurate, high-resolution, enhanced-coverage multi-target sensing.
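To make the beam-steering principle concrete, the sketch below computes the normalized array factor of a generic uniform linear array whose per-element phase shifts are set to point the beam at a chosen angle. It is a textbook illustration under simplified assumptions (isotropic elements, half-wavelength spacing), not a model of the dual-band 44/60 GHz front-end itself.

```python
import numpy as np

def array_factor(theta_deg, n_elements=8, spacing_wl=0.5, steer_deg=30.0):
    """Normalized |array factor| of a uniform linear array steered to steer_deg.

    Element n receives the phase -n*k*d*sin(theta0); the per-element responses
    add coherently, and the beam peaks, at theta = theta0."""
    theta = np.radians(np.atleast_1d(theta_deg))
    kd = 2 * np.pi * spacing_wl                  # k*d with the spacing d in wavelengths
    n = np.arange(n_elements)
    weights = np.exp(-1j * n * kd * np.sin(np.radians(steer_deg)))
    af = np.abs(np.exp(1j * np.outer(np.sin(theta), n) * kd) @ weights)
    return af / n_elements

angles = np.linspace(-90, 90, 181)
af = array_factor(angles)
print("beam peak at", angles[np.argmax(af)], "deg")   # ~30 deg for the default steering angle
```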
Baseband DSP for dirty RF front-ends: Theory, algorithms, and test bed
Authors: Ozgur Ozdemir, Ridha Hamila and Naofal Al-Dhahir
Orthogonal Frequency Division Multiplexing (OFDM) is widely adopted as the transmission scheme of choice for almost all broadband wireless standards (including WLAN, WiMAX, LTE, DVB, etc.) due to its multipath resilience and practical implementation complexity. The direct-conversion radio frequency (RF) front-end architecture, where the down-conversion to baseband is accomplished in a single stage, requires fewer analog components than the super-heterodyne architecture, where the down-conversion is accomplished with one or more intermediate frequency (IF) stages. Fewer analog components result in reduced power consumption and cost. However, direct-conversion OFDM-based broadband wireless transceivers suffer from several performance-limiting RF/analog impairments, including I/Q imbalance and phase noise (PHN) [1]. I/Q imbalance refers to the amplitude and phase mismatches between the in-phase (I) and quadrature (Q) branches at the transmit and receive sides. In an OFDM system with I/Q imbalance, the transmitted signal at a particular subcarrier is corrupted by interference from the image subcarrier. To compensate for the effect of I/Q imbalance, the received signals from each subcarrier and its image subcarrier are processed jointly. PHN refers to the random unknown phase difference between the carrier signal and the local oscillator. In an OFDM transceiver, PHN rotates the signal constellation on each subcarrier and causes inter-carrier interference between the subcarriers, resulting in significant performance degradation. In this project, we designed novel algorithms for the efficient estimation and compensation of RF impairments, including I/Q imbalance and PHN, for beamforming OFDM systems such as 4G LTE cellular systems and 802.11 wireless local area networks (WLAN) [2]. Conventional OFDM transceivers ignore I/Q imbalance effects and process each OFDM subcarrier separately, which causes an irreducible error floor in the bit error rate performance. In our proposed method, each OFDM subcarrier and its image subcarrier are processed jointly to mitigate the effects of I/Q imbalance. This novel method is capable of eliminating the error floor and obtaining performance close to the ideal case where no I/Q imbalance exists. Furthermore, we have developed an experimental OFDM testbed to implement the proposed algorithms. Our testbed uses Universal Software Radio Peripheral (USRP) N210 RF front-ends and is based on packet-based OFDM similar to the IEEE 802.11a WLAN standard. The baseband processing is done in MATLAB, where the MATLAB driver for the USRP is used for stream processing of the transmitted and received signals. The measured experimental results demonstrate that the proposed algorithms improve performance significantly at low implementation complexity. [1] B. Razavi, RF Microelectronics. Englewood Cliffs, NJ: Prentice-Hall, 1998. [2] O. Ozdemir, R. Hamila, and N. Al-Dhahir, “I/Q imbalance in multiple beamforming OFDM transceivers: SINR analysis and digital baseband compensation,” IEEE Trans. on Communications, vol. 61, no. 5, pp. 1914-1925, May 2013. Acknowledgement: This work is supported by the Qatar National Research Fund (QNRF), Grant NPRP 09-062-2-035.
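The joint per-subcarrier-pair processing idea can be sketched numerically as follows, using a standard frequency-domain model of receive-side I/Q imbalance (parameters mu and nu derived from an assumed gain/phase mismatch). The constellation, channel and mismatch values are illustrative placeholders, not the project's actual algorithm or testbed code.

```python
import numpy as np
rng = np.random.default_rng(0)

N = 64                                   # OFDM subcarriers (illustrative size)
g, phi = 1.05, np.radians(3.0)           # assumed receive amplitude/phase mismatch
mu = 0.5 * (1 + g * np.exp(1j * phi))    # standard receive I/Q-imbalance parameters
nu = 0.5 * (1 - g * np.exp(1j * phi))

X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)   # QPSK symbols
H = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # per-subcarrier channel

img = (-np.arange(N)) % N                # index of the image subcarrier (-k mod N)
Y = H * X                                # ideal received frequency-domain signal
R = mu * Y + nu * np.conj(Y[img])        # observation corrupted by image-subcarrier leakage

# Joint per-pair compensation: solve a 2x2 system for X[k] and conj(X[-k]).
X_hat = np.empty(N, complex)
for k in range(N):
    m = img[k]
    A = np.array([[mu * H[k],          nu * np.conj(H[m])],
                  [np.conj(nu) * H[k], np.conj(mu) * np.conj(H[m])]])
    b = np.array([R[k], np.conj(R[m])])
    X_hat[k] = np.linalg.solve(A, b)[0]

print(np.max(np.abs(X_hat - X)))         # ~0 in this noiseless illustration
```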
MobiBots: Towards detecting distributed mobile botnets
Authors: Abderrahmen Mtibaa, Hussein Alnuweiri and Khaled A Harras
Completely automated robust segmentation of intravascular ultrasound images
By Chi Hau Chen
It is widely known that the state of a patient's coronary heart disease can be better assessed using intravascular ultrasound (IVUS) than with more conventional angiography. Recent work has shown that segmentation and 3D reconstruction of IVUS pull-back sequence images can be used for computational fluid dynamic simulation of blood flow through the coronary arteries. This map of shear stress in the blood vessel walls can be used to predict the susceptibility of a region of the arteries to future arteriosclerosis and disease. Manual segmentation of images is time consuming as well as cost prohibitive for routine diagnostic use. Current segmentation algorithms do not achieve a high enough accuracy because of the presence of speckle due to blood flow, the relatively low resolution of the images, and the presence of various artifacts including guide-wires, stents, vessel branches, and other growths or inflammations. In addition, the image may suffer additional blur due to movement distortions, as well as resolution-related mixing of closely resembling pixels, which produces a type of out-of-focus blur. Robust automated segmentation achieving a high accuracy of 95% or above has been elusive despite work by a large community of researchers in the machine vision field. We propose a comprehensive approach in which a multitude of algorithms are applied simultaneously to the segmentation problem. In an initial step, pattern recognition methods are used to detect and localize artifacts. We have achieved a high accuracy of 95% or better in detecting frames with stents and the location of the guide-wire in a large data set consisting of 15 pull-back sequences with about 1000 image frames each. Our algorithms for lumen segmentation using spatio-temporal texture detection and active contour models have achieved accuracies approaching 70% on the same data set, which is on the high side of reported accuracies in the literature. Further work is required to combine these methods to increase segmentation accuracy. One approach we are investigating is to combine algorithms using a meta-algorithmic approach. Each segmentation algorithm computes, along with the segmentation, a measure of confidence in the segmentation, which can be biased by prior information about the presence of artifacts. A meta-algorithm then runs a library of algorithms on a sub-sequence of images to be segmented and chooses the segmentation based on the computed confidence measures. Machine learning and testing are performed on a large database. This research is in collaboration with Brigham and Women's Hospital in Boston, which has provided well over 45,000 frames of data for the study.
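The meta-algorithmic combination step can be sketched as follows: each segmentation algorithm returns a result plus a confidence that may be biased by previously detected artifacts, and the meta-algorithm keeps, per frame, the highest-confidence result. Algorithm names, confidence values and the artifact bias rule are illustrative stand-ins for the actual library.

```python
from typing import Callable, List, Tuple

# A segmentation algorithm returns (segmentation, confidence); the confidence may be
# biased by prior artifact detection (e.g. lowered when a stent or guide-wire is present).
SegAlgo = Callable[[object, dict], Tuple[object, float]]

def meta_segment(frames: List[object], algorithms: List[SegAlgo], artifacts: dict):
    """Run a library of segmentation algorithms on a frame sub-sequence and keep,
    per frame, the result whose (artifact-aware) confidence is highest."""
    chosen = []
    for frame in frames:
        candidates = [algo(frame, artifacts) for algo in algorithms]
        seg, conf = max(candidates, key=lambda c: c[1])
        chosen.append((seg, conf))
    return chosen

# Toy stand-ins for real algorithms (e.g. texture-based vs. active-contour segmentation).
def algo_texture(frame, artifacts):  return ("texture-seg", 0.7 - 0.2 * artifacts.get("stent", 0))
def algo_contour(frame, artifacts):  return ("contour-seg", 0.6)

print(meta_segment(frames=[0, 1], algorithms=[algo_texture, algo_contour],
                   artifacts={"stent": 1}))
```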
OSCAR: An incentive-based collaborative bandwidth aggregation system
The explosive demand for mobile data, predicted to increase 25 to 50 times by 2015, along with expensive data roaming charges and users' expectation of remaining continuously connected, is creating novel challenges for service providers and researchers. A potential approach for solving these problems is exploiting all communication interfaces available on modern mobile devices in both solitary and collaborative forms. In the solitary form, the goal is to exploit any direct Internet connectivity on any of the available interfaces by distributing application data across them in order to achieve higher throughput, minimize energy consumption, and/or minimize cost. In the collaborative form, the goal is to enable and incentivize mobile devices to utilize their neighbors' under-utilized bandwidth in addition to their own direct Internet connections. Despite today's mobile devices being equipped with multiple interfaces, there has been a high deployment barrier to adopting collaborative multi-interface solutions. In addition, existing solutions focus on bandwidth maximization without paying sufficient attention to energy efficiency and effective incentive systems. We present OSCAR, a multi-objective, incentive-based, collaborative, and deployable bandwidth aggregation system that fulfills the following requirements: (1) It is easily deployable, requiring no changes to legacy servers, applications, or network infrastructure (e.g. adding new hardware such as proxies and routers). (2) It seamlessly exploits available network interfaces in solitary and collaborative forms. (3) It adapts to real-time Internet characteristics and varying system parameters to achieve efficient utilization of these interfaces. (4) It is equipped with an incentive system that encourages users to share their bandwidth with others. (5) It adopts an optimal multi-objective and multi-modal scheduler that maximizes overall system throughput while minimizing cost and energy consumption based on user requirements and system status. (6) It leverages incremental system adoption and deployment to further enhance performance gains. A typical scenario for OSCAR is shown in Figure 1. Our contributions are summarized as follows: (1) designing the OSCAR system architecture that fulfills the requirements above; (2) formulating OSCAR's data scheduler as an optimal multi-objective, multi-modal scheduler that takes user requirements, device context information, and application requirements into consideration while distributing application data across multiple local and neighboring devices' interfaces; and (3) developing the OSCAR communication protocol, which implements our proposed credit-based incentive system and enables secure communication between the collaborating nodes and OSCAR-enabled servers. We evaluate OSCAR via implementation on Linux, as well as via simulation, and compare the results to the optimal achievable throughput, cost, and energy consumption rates. The OSCAR system, including its communication protocol, is implemented over the Click Modular Router framework in order to demonstrate its ease of deployment. Our results, which are verified via NS2 simulations, show that with no changes to current Internet architectures, OSCAR reaches the throughput upper bound. It also provides up to 150% enhancement in throughput compared to current operating systems without changes to legacy servers.
Our results also demonstrate OSCAR's ability to maintain cost and energy consumption levels within user-defined acceptable ranges.
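A toy sketch of the kind of multi-objective split the scheduler performs is shown below: each local or neighboring interface is scored by a weighted combination of throughput, cost and energy, and traffic is divided in proportion to the scores. The real OSCAR scheduler is formulated as an optimal multi-objective, multi-modal optimization; the weights and numbers here are placeholders.

```python
from dataclasses import dataclass

@dataclass
class Interface:
    name: str
    throughput_mbps: float   # estimated achievable rate
    cost_per_mb: float       # monetary cost (e.g. cellular vs. shared WiFi)
    energy_per_mb: float     # device energy cost

def schedule(interfaces, w_tput=1.0, w_cost=0.5, w_energy=0.5):
    """Toy multi-objective scheduler: score each interface by a weighted combination
    of throughput, cost and energy (weights reflect user preferences), then split
    traffic in proportion to the positive scores."""
    scores = [max(0.0, w_tput * i.throughput_mbps
                       - w_cost * i.cost_per_mb
                       - w_energy * i.energy_per_mb) for i in interfaces]
    total = sum(scores) or 1.0
    return {i.name: s / total for i, s in zip(interfaces, scores)}

links = [Interface("own-WiFi", 20, 0.0, 1.0),
         Interface("own-3G", 5, 2.0, 2.0),
         Interface("neighbour-WiFi", 10, 0.5, 1.5)]   # reached via the incentive system
print(schedule(links))
```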
Towards image-guided, minimally-invasive robotic surgery for partial nephrectomy
Introduction: Surgery remains one of the primary methods for treating cancerous tumours. Minimally-invasive robotic surgery, in particular, provides several benefits, such as filtering of hand tremor, more complex and flexible manipulation capabilities that lead to increased dexterity and higher precision, and more comfortable seating for the surgeon. All of these in turn lead to reduced blood loss, lower infection and complication rates, less post-operative pain, shorter hospital stays, and better overall surgical outcomes. Pre-operative 3D medical imaging modalities, mainly magnetic resonance imaging (MRI) and computed tomography (CT), are used for surgical planning, in which tumour excision margins are identified for maximal sparing of healthy tissue. However, transferring such plans from the pre-operative frame of reference to the dynamic intra-operative scene remains a necessary yet largely unsolved problem. We summarize our team's progress towards addressing this problem, focusing on robot-assisted partial nephrectomy (RAPN) performed with a da Vinci surgical robot. Method: We perform pre-operative 3D image segmentation of the tumour and surrounding healthy tissue using interactive random walker image segmentation, which provides an uncertainty-encoding segmentation used to construct a 3D model of the segmented patient anatomy. We reconstruct the 3D geometry of the surgical scene from the stereo endoscopic video, regularized by the patient-specific shape prior. We process the endoscopic images to detect tissue boundaries and other features. Then we align, first via rigid and then via deformable registration, the pre-operative segmentation to the 3D reconstructed scene and the endoscopic image features. Finally, we present to the surgeon an augmented reality view showing an overlay of the tumour resection targets on top of the endoscopic view, in a way that depicts the uncertainty in localizing the tumour boundary. Material: We collected pre-operative and intra-operative patient data in the context of RAPN, including stereo endoscopic video at full HD 1080i (da Vinci S HD Surgical System), CT images (Siemens CT Sensation 16 and 64 slices), MR images (Siemens MRI Avanto 1.5T), and US images (Ultrasonix SonixTablet with a flexible laparoscopic linear probe). We also acquired CT images and stereo video from in-silico phantoms and ex-vivo lamb kidneys with artificial tumours for test and validation purposes. Results and Discussion: We successfully developed a novel proof-of-concept framework for a prior- and uncertainty-encoded augmented reality system that fuses pre-operative patient-specific information into the intra-operative surgical scene. Preliminary studies and initial surgeon feedback on the developed augmented reality system are encouraging. Our future work will focus on investigating the use of intra-operative US data in our system to leverage all imaging modalities available during surgery. Before a full system integration of these components, improving the accuracy and speed of the aforementioned algorithms, and the intuitiveness of the augmented reality visualization, remain active research projects for our team.
Summarizing machine translation text: An English-Arabic case study
Authors: Houda Bouamor, Behrang Mohit and Kemal Oflazer
Machine translation (MT) has been championed as an effective technology for knowledge transfer from English to languages with less digital content. An example of such efforts is the automatic translation of the English Wikipedia into languages with smaller collections, such as Arabic. However, MT quality is still far from ideal for many languages and text genres. While translating a document, many sentences are poorly translated, which can yield incorrect text and confuse the reader. Moreover, some of these sentences are not as informative and could be summarized to make a more cohesive document. Thus, for tasks in which complete translation is not mandatory, MT can be effective if the system can provide an informative subset of the content with higher translation quality. For this scenario, text summarization can provide effective support for MT by keeping only the most important and informative parts of a given document for translation. In this work, we demonstrate a framework combining MT and text summarization that replaces the baseline translation with a proper summary that has higher translation quality than the full translation. For this, we combine a state-of-the-art English summarization system with a novel framework for predicting MT quality without references. Our framework is composed of the following major components: (a) a standard machine translation system, (b) a reference-free MT quality estimation system, (c) an MT-aware summarization system, and (d) an English-Arabic sentence matcher. More specifically, our English-Arabic system reads in an English document along with its baseline Arabic translation and outputs, as a summary, a subset of the Arabic sentences based on their informativeness and their translation quality. We demonstrate the utility of our system by evaluating it with respect to its translation and summarization quality, and demonstrate that we can balance improving MT quality against maintaining decent summarization quality. For summarization, we conduct both reference-based and reference-free evaluations and observe performance in the range of state-of-the-art systems. Moreover, the translation quality of the summaries shows an important improvement over the baseline translation of the entire documents. This MT-aware summarization approach can be applied to the translation of texts such as Wikipedia articles. For such domain-rich articles, there is large variation in translation quality across different sections. An intelligent reduction of the translation task results in an improved final outcome. Finally, the framework is mostly language independent and can be easily customized for different target languages and domains.
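The sentence-selection step can be sketched as a simple trade-off between informativeness and predicted translation quality, as below. The scores would come from an upstream summarizer and a reference-free quality-estimation model; here they are supplied directly, and the weighting is illustrative rather than the paper's actual model.

```python
def mt_aware_summary(sentences, informativeness, mt_quality, budget=3, alpha=0.6):
    """Pick a fixed-size subset of sentences scored by a convex combination of
    summarization informativeness and predicted (reference-free) MT quality."""
    scored = sorted(zip(sentences, informativeness, mt_quality),
                    key=lambda t: alpha * t[1] + (1 - alpha) * t[2],
                    reverse=True)
    return [s for s, _, _ in scored[:budget]]

sents = ["s1", "s2", "s3", "s4", "s5"]
info  = [0.9, 0.2, 0.7, 0.6, 0.4]    # how central the sentence is to the document
qual  = [0.3, 0.9, 0.8, 0.4, 0.9]    # predicted translation quality (no reference needed)
print(mt_aware_summary(sents, info, qual))
```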
Distributed algorithms in wireless sensor networks: An approach for applying binary consensus in large testbeds
Our work represents a new starting point for a wireless sensor network implementation of a cooperative algorithm called the binary consensus algorithm. Binary consensus is used to allow a collection of distributed entities to reach consensus regarding the answer to a binary question, with the final decision based on the majority opinion. Binary consensus can play a basic role in increasing the accuracy of detecting event occurrence. Existing work on binary consensus focuses on simulation of the algorithm in a purely theoretical sense. We have adapted the binary consensus algorithm for use in wireless sensor networks. This is achieved by specifying how motes find partners to update state with, as well as by adding a heuristic for individual motes to determine convergence. In traditional binary consensus, individual nodes do not have a stop condition, meaning nodes continue to transmit even after convergence has occurred. In WSNs, however, this is unacceptable since it consumes power, so in order to save power sensor motes should stop communicating when the whole network converges. For that reason we have designed a tunable heuristic value N that allows motes to estimate when convergence has occurred. We have evaluated our algorithm successfully in hardware using 139 IRIS sensor motes and have further supported our results using the TOSSIM simulator. We were able to minimize the convergence time, reaching optimal results. The results also show that the denser the network, the lower the convergence time. In addition, convergence speed depends on the number of motes present in the network. During the experiments none of the motes failed and our algorithm converged correctly. The hardware as well as the simulation results demonstrate that the convergence speed depends on the topology type, the number of motes present in the network, and the distribution of the initial states.
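The stop-condition idea can be illustrated with the simplified sketch below, which substitutes randomized pairwise gossip averaging for the actual binary consensus update rule: every mote keeps an estimate of the fraction of 1-opinions and stops transmitting once its estimate has been stable for N consecutive exchanges. It only illustrates how a tunable quiet-count heuristic trims communication, not the implemented algorithm.

```python
import random
random.seed(1)

def gossip_majority(initial_bits, n_quiet=8, eps=1e-3, max_rounds=50000):
    """Simplified stand-in for binary consensus: pairwise gossip averaging of the
    fraction of 1-opinions, plus a tunable stop heuristic (a mote stops once its
    estimate has been stable for n_quiet consecutive exchanges)."""
    est = [float(b) for b in initial_bits]
    quiet = [0] * len(est)                       # consecutive "no significant change" counts
    exchanges = 0
    for _ in range(max_rounds):
        active = [i for i, q in enumerate(quiet) if q < n_quiet]
        if len(active) < 2:
            break                                # every mote considers itself converged
        i, j = random.sample(active, 2)          # a mote and a randomly found partner
        exchanges += 1
        new = 0.5 * (est[i] + est[j])
        for k in (i, j):
            quiet[k] = quiet[k] + 1 if abs(new - est[k]) < eps else 0
            est[k] = new
    decisions = [e > 0.5 for e in est]           # each mote's majority decision
    return decisions, exchanges

bits = [1] * 80 + [0] * 59                       # 139 motes, true majority opinion is 1
decisions, exchanges = gossip_majority(bits)
print(sum(decisions), "of", len(decisions), "motes report the majority opinion after",
      exchanges, "pairwise exchanges")
```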
Silicon radio-frequency integrated-circuits for advanced wireless communications and sensing
By Cam Nguyen
Silicon-based radio-frequency integrated circuit (RFIC) hardware is the backbone of advanced wireless communication and sensing systems, enabling low-cost, small-size, and high-performance single-chip solutions. Advanced RF wireless systems, and in turn silicon RFICs, are relevant not only to commercial and military applications but also to national infrastructures. This importance is even more pronounced as the development of civilian technologies becomes increasingly important to national economic growth. New applications utilizing silicon RFIC technologies continue to emerge, spanning the spectrum from ultra-wideband to millimeter-wave and submillimeter-wave ultra-high-capacity wireless communications; from sensing for airport security to inventory management for gas and oil; and from detection and inspection of buried underground oil and gas pipes to wireless power transmission and data communications for smart wells. In this talk, we will present some of our recent developments in silicon RFICs for advanced wireless communications and sensing.
PLATE: Problem-based learning authoring and transformation environment
The Problem-based Learning Authoring and Transformation Environment (PLATE) project seeks to improve student learning using innovative approaches to problem-based learning (PBL) in a cost-effective, flexible, interoperable, and reusable manner. Traditional subject-based learning that focuses on passively learning facts and reciting them out of context is no longer sufficient to prepare potential engineers, and all students, to be effective. Within the last two decades, the problem-based learning approach to education has started to make inroads into engineering and science education. This PBL educational approach comprises an authentic, ill-structured problem with multiple possible routes to multiple possible solutions. The PBL approach offers unscripted opportunities for students to identify personal knowledge gaps as starting points for individual learning. Additionally, it requires a facilitator (not a traditional teacher) who guides learning by asking probing questions that model expert cognitive reasoning and problem-solving strategies. Bringing real-life context and technologies into the curriculum through a problem-based learning approach encourages students to become independent workers, critical thinkers, problem solvers, lifelong learners, and team workers. A systematic approach to supporting online PBL is the use of a pedagogy-generic e-learning platform such as IMS Learning Design (IMS-LD 2003), an e-learning technical standard useful for scripting a wide range of pedagogical strategies as formal models. The PLATE project uses IMS-LD. It seeks to research and develop a process modeling approach together with software tools to support the development and delivery of face-to-face, online, and hybrid PBL courses or lessons in a cost-effective, flexible, interoperable, and reusable manner. The research team seeks to show that the PLATE authoring system optimizes learning and that the PLATE system improves learning in PBL activities. For this poster presentation, the research team will demonstrate the progress it has made within the first year of research. This includes the development of a prototype PBL scripting language to represent a wide range of PBL models, the creation of transformation functions to map PBL models represented in the PBL scripting language into executable models represented in IMS-LD, and the architecture of the PLATE authoring tool. The research team plans to illustrate that the research and development of a PBL scripting language and the associated authoring and execution environment can provide a significant thrust toward further research on PBL by using meta-analysis, designing effective PBL models, and extending or improving the PBL scripting language. The team believes that PBL researchers can use the PBL scripting language and authoring tools to create, analyze, test, improve, and communicate various PBL models. The PLATE project can enable PBL practitioners to develop, understand, customize, and reuse PBL models at a high level by relieving them of the burden of handling the complex details of implementing a PBL course. The research team believes that the project will stimulate the application and use of PBL in curricula with online learning practice by incorporating PBL support into popular e-learning platforms and by providing a repository of PBL models and courses.
Advance In Adaptive Modulation For Fading Channels
Smart phones are becoming the dominant handsets available to wireless technology users, and wireless access to the Internet is becoming the default scenario for the vast majority of Internet users. The increasing demand for high-speed wireless Internet services pushes current technologies to their limits due to channel impairments. The conventional adaptive modulation technique (CAMT) is no longer sufficient given the high data rate requirements of new technologies, wireless video streaming, and other applications such as downloading large files. CAMT is one of the most powerful techniques currently used in advanced wireless communication systems such as Long Term Evolution (LTE). It is used to enhance energy efficiency and increase the spectral efficiency of wireless communication systems over fading channels. CAMT dynamically changes the modulation scheme based on channel conditions to maximize throughput with minimum bit error rate (BER), using the channel state information of each user, which is fed back to the transmitter by the receiver via a reliable channel. CAMT is based on a predefined set of signal-to-noise ratio (SNR) ranges for different modulation orders: higher SNR ranges map to higher modulation orders, allowing higher transmission speeds that exploit good channel conditions. To minimize BER, the modulation order is reduced when the channel condition degrades, which results in lower spectral efficiency but a more robust modulation scheme. This dynamic change of modulation order based on the SNR range of the radio channel is the key mechanism by which CAMT increases throughput and minimizes BER. This work proposes an advance on the AMT that utilizes additional channel state information beyond the SNR ranges, namely how severe the fading experienced by the channel is. The severity is measured by the amount of fading (AF), computed from the first and second central moments of the envelope amplitude. This additional information helps distinguish between channel conditions that have the same average SNR range but different levels of fading severity, which can be exploited to improve the performance of CAMT. Different levels of fading severity with similar SNR ranges have been tested with Nakagami-m fading channels, for which the AF measure of fading severity equals 1/m. The investigation in this work therefore tests how to leverage the AF dimension in addition to the conventional approach used in CAMT. We show that the BER of different modulation schemes depends on the amount of fading within every SNR range defined by the AMT sets. Current results show dramatic improvements in BER performance and throughput when AF is leveraged together with the SNR-range approach defined in CAMT. Utilizing AF with SNR ranges allows adopting a higher modulation order in channel conditions where this was not possible with the conventional AMT.
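A small sketch of the AF idea is given below, using the standard definition AF = Var(r^2)/E[r^2]^2 (which equals 1/m for Nakagami-m fading) and a toy AF-aware modulation selection rule. The SNR ranges and AF threshold are illustrative placeholders, not the switching levels studied in this work.

```python
import numpy as np
rng = np.random.default_rng(0)

def amount_of_fading(envelope):
    """AF = Var(r^2) / E[r^2]^2: the variance of the instantaneous power normalized
    by the squared mean power; equals 1/m for Nakagami-m fading."""
    p = np.asarray(envelope, float) ** 2
    return p.var() / p.mean() ** 2

def select_modulation(snr_db, af,
                      snr_edges=(6.0, 12.0, 18.0),   # illustrative CAMT-style SNR ranges
                      af_relax=0.5):                 # illustrative severity threshold
    """Toy AF-aware adaptation: pick the order from the SNR range as in CAMT, then
    allow one step higher when the estimated fading is mild (AF below a threshold)."""
    orders = [2, 4, 16, 64]                          # BPSK, QPSK, 16-QAM, 64-QAM
    idx = int(np.searchsorted(snr_edges, snr_db))
    if af < af_relax and idx < len(orders) - 1:
        idx += 1                                     # mild fading: be more aggressive
    return orders[idx]

# Nakagami-m envelope samples: instantaneous power is Gamma(m, Omega/m) distributed.
m, omega = 2.0, 1.0
envelope = np.sqrt(rng.gamma(shape=m, scale=omega / m, size=100_000))
af_hat = amount_of_fading(envelope)
print(f"estimated AF = {af_hat:.3f} (theory 1/m = {1/m:.3f})")
print("chosen constellation size at 14 dB:", select_modulation(14.0, af_hat))
```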
Real-time multiple moving vehicle detection and tracking framework for autonomous UAV monitoring of urban traffic
Unmanned aerial vehicles (UAVs) have the potential to provide comprehensive information for traffic monitoring, road conditions and emergency response. However, to enable autonomous UAV operations, video images captured by UAV cameras are processed using state-of-the-art algorithms for vehicle detection, recognition, and tracking. Nevertheless, processing of aerial UAV images is challenging because the images are usually captured with low-resolution cameras, from high altitudes, and while the UAV is in continuous motion. The latter enforces the need to decouple camera and scene motion, and most techniques for moving vehicle detection perform ego-motion compensation to separate camera motion from scene motion. To this end, registration of successive image frames is first carried out to match two or more images of the same scene taken at different times, followed by moving-vehicle labeling. Detected vehicles of interest are routinely tracked by the UAV. However, vehicle tracking in UAV imagery is challenging due to constantly changing camera vantage points, changes in illumination, and occlusions. The majority of existing vehicle detection and tracking techniques suffer from reduced accuracy and/or entail intensive processing that prohibits their deployment onboard UAVs unless intensive computational resources are utilized. This paper presents a novel multiple-moving-vehicle detection and tracking framework that is suitable for UAV traffic monitoring applications. The proposed framework executes in real time with improved accuracy and is based on image feature processing and projective geometry. FAST image features are first extracted, and outlier features are then identified using least median of squares estimation. Moving vehicles are subsequently detected with a density-based spatial clustering algorithm. Vehicles are tracked using Kalman filtering, while an overlap-rate-based data association mechanism followed by a tracking persistency check is used to discriminate between true moving vehicles and false detections. The proposed framework does not involve the explicit application of image transformations, i.e. warping, to detect potential moving vehicles, which reduces computational time and decreases the probability of wrongly detected vehicles due to registration errors. Furthermore, the use of data association to correlate detected and tracked vehicles, along with selective updating of the target's template based on the data association decision, significantly improves the overall tracking accuracy. For quantitative evaluation, a testbed has been implemented to evaluate the proposed framework on three datasets: the standard DARPA Eglin-1 and RedTeam datasets, and a home-collected dataset. The proposed framework achieves recall rates of 97.1% and 96.8% (average 96.9%) and precision rates of 99.1% and 95.8% (average 97.4%) for the Eglin-1 and RedTeam datasets, respectively, with an overall average precision of 97.1%. When evaluated on the home-collected dataset, it achieved 95.6% recall and 96.3% precision. Compared to other moving vehicle detection and tracking techniques found in the literature, the proposed framework achieves higher accuracy on average and is less computationally demanding. The quantitative results thus demonstrate the potential of the proposed framework for autonomous UAV traffic monitoring applications.
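The detection stage can be sketched as below on synthetic data: a robust global-motion estimate (a median translation standing in for the least-median-of-squares model) flags features that are not explained by camera motion, and density-based clustering (DBSCAN) groups the remaining outliers into moving-vehicle candidates. Feature extraction and matching (e.g. FAST), Kalman tracking and data association are assumed to happen upstream and downstream; all numbers are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)

# Matched feature positions in two consecutive frames (e.g. from FAST detection plus
# matching). Synthetic here: the background undergoes a pure 5-px camera shift, while
# 12 co-located features belong to a vehicle moving differently.
pts_prev = rng.uniform(0, 640, size=(200, 2))
pts_prev[:12] = rng.uniform(300, 330, size=(12, 2))       # features on one vehicle
pts_curr = pts_prev + np.array([5.0, 0.0]) + rng.normal(0, 0.3, size=pts_prev.shape)
pts_curr[:12] += np.array([8.0, 4.0])                     # extra motion relative to the ground

# Simplified ego-motion estimate: a robust (median) global translation stands in
# for the least-median-of-squares model used in the framework.
flow = pts_curr - pts_prev
global_motion = np.median(flow, axis=0)
residual = np.linalg.norm(flow - global_motion, axis=1)
outliers = pts_curr[residual > 3.0]                       # not explained by camera motion

# Density-based clustering groups outlier features into moving-vehicle candidates.
labels = DBSCAN(eps=50.0, min_samples=4).fit_predict(outliers)
for lab in sorted(set(labels) - {-1}):
    centroid = outliers[labels == lab].mean(axis=0)
    print(f"moving-object candidate {lab}: centroid {centroid.round(1)}")
```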
Mental task discrimination: Digital signal processing
Abstract: The objective of this research is to increase the accuracy of discriminating between different mental tasks through careful analysis of the brain's electrical signals, which can be received by sensors, amplified, and stored. Brain electrical signals are weak and are affected by the surrounding environment, so the recorded signal carries noise that changes it from its true value. A further goal of this research is therefore to remove as much noise as possible from the recorded brain signals, to represent each signal precisely by features and components unique to it, and then to train the system on these signals together with the mental task each corresponds to. Once the system has been trained on the available training signals, the testing phase begins: the system is given a new signal, compares it with those stored during training, and classifies it into one of the stored mental tasks. Five mental tasks are discriminated in this research: baseline (task 1), when the person is completely relaxed; multiplication (task 2), when the mind performs a non-trivial multiplication; letter composing (task 3), when the person imagines writing and forming a letter in their mind; rotation (task 4), when the person imagines a three-dimensional model rotating around an axis; and counting (task 5), when the person imagines writing numbers in sequence. An accuracy of 83.7813% was achieved in discriminating between the five tasks; that is, out of 100 brain signals, about 84 are correctly identified and assigned to the mental task that produced them. There are high hopes of building on this research by increasing the number of mental tasks that can be distinguished, working towards distinguishing all mental tasks and understanding what a human is thinking and feeling. A further objective of this research is to help people with special needs by recognizing what is going on in their minds and meeting their needs. It is hoped that all those interested in this area will benefit from this research, that it will lead to new heights in the analysis and understanding of the functions of the human mind, and that it will offer full support to fellow humans with special needs.
Macro-small handover in LTE under energy constraints
By Nizar Zorba
Green communications have emerged as one of the most important trends in wireless communications because of their advantages of interference reduction, longer battery life and lower electricity bills. Their application to handover mechanisms is crucial for integration in practical systems, as handover is one of the most resource-consuming operations in the system and has to be optimized under the objective of green communications. On the other hand, a decrease in energy consumption should not mean lower performance for the operator and customers. Therefore, this work presents a hybrid handover mechanism in which the two conflicting objectives of load balancing and energy consumption are tackled: the operator's objective is to balance the data load among its macro and small cells, while the user equipment's objective is to decrease the consumed energy in order to guarantee longer battery life.
A concurrent tri-band low-noise amplifier for multiband communications and sensing
By Cam Nguyen
Concurrent multiband receivers receive and process multiple frequency bands simultaneously. They are thus capable of providing multitask or multifunction operation to meet consumer needs in modern wireless communications. Concurrent multiband receivers require at least some of their components to operate concurrently at different frequency bands, which results in a substantial reduction of cost and power dissipation. Fig. 1 shows a simplified concurrent multiband receiver, typically consisting of an off-chip antenna and an on-chip low-noise amplifier (LNA) and mixer. While the mixer can be designed as a multiband or wideband component, the LNA should perform as a concurrent multiband device and hence requires proper input matching to the antenna, low noise figure (NF), high gain and high linearity to handle multiple input signals simultaneously. Therefore, the design of concurrent multiband LNAs is the most critical issue for the implementation of fully integrated low-cost and low-power concurrent multiband receivers. In this talk, we present a 13/24/35-GHz concurrent tri-band LNA implementing a novel tri-band load composed of two feedback notch filters. The tri-band load is composed of two passive LC notch filters with feedback. The tri-band LNA, fabricated in a 0.18-μm SiGe BiCMOS process, achieves power gains of 22.3/24.6/22.2 dB at 13.5/24.5/34.5 GHz, respectively. It achieves a best noise figure of 3.7/3.3/4.3 dB and an IIP3 of -17.5/-18.5/-15.6 dBm in the 13.5/24.5/34.5 GHz passbands, respectively. The tri-band LNA consumes 36 mW from a 1.8 V supply and occupies 920 μm × 500 μm.
Hardware implementation of principal component analysis for gas identification systems on the Zynq SoC platform
Principal component analysis (PCA) is a commonly used technique for data reduction in general, as well as for dimensionality reduction in gas identification systems when a sensor array is used. A complete PCA IP core for gas applications has been designed and implemented on the Zynq programmable system on chip (SoC). The new heterogeneous Zynq platform, with its ARM processor and programmable logic (PL), was used because it is becoming an interesting alternative to conventional field programmable gate array (FPGA) platforms. All steps of the learning and testing phases of PCA, from the mean computation to the projection of data onto the new space, including the normalization process, the covariance matrix and the eigenvector computation, were developed in C and synthesized using the new Xilinx Vivado high-level synthesis (HLS) tool. The eigenvectors were found using the Jacobi method. The implemented hardware of the presented PCA algorithm for a 16×30 matrix was faster than the software counterpart, with a speed-up of 1.41 times over execution on a desktop running a 64-bit Intel i7-3770 processor at 3.40 GHz. The implemented IP core consumed an average of 23% of all on-chip resources. The PCA algorithm used in the learning phase is executed first so that the system is trained on a specific data set and produces the vector of means along with the eigenvectors that will be used in the testing part. The PCA algorithm used in the testing phase will also be used in real-time identification. For testing purposes, a data set was used that represents the output of a 16-element gas sensor array exposed to three types of gases (CO, ethanol and H2) in ten different concentrations (20, 40, 60, 80, 120, 140, 160, 180 and 200 ppm). The aim was to reduce the 30 samples of 16 dimensions to 30 vectors of 2 or 3 dimensions, depending on the need. The combination of the Zynq platform and the HLS tool showed many benefits: using Vivado HLS resulted in a considerable gain in prototyping time, because the design was specified in a high-level language such as C or C++ rather than a hardware description language such as VHDL or Verilog. Using the Zynq platform highlighted further advantages compared with conventional FPGA platforms, such as the possibility of splitting the design, executing the simple part in software on the ARM processor and leaving the complex part for hardware acceleration. It is planned to further optimize the IP core using Vivado HLS directives; the developed core will be used in a larger gas identification system for dimensionality reduction. The larger gas identification system will be used to identify a given gas and estimate its concentration, and will be part of a low-power reconfigurable self-calibrated multi-sensing platform.
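The learning and testing phases described above can be sketched in a few lines: compute the mean and normalization scale, form the covariance of the sensor channels, take its leading eigenvectors, and project new measurements onto them. The hardware core computes the eigenvectors with the Jacobi method; the sketch uses numpy's symmetric eigensolver purely for illustration, and the random matrix stands in for the 30 exposures of the 16-sensor array.

```python
import numpy as np
rng = np.random.default_rng(0)

def pca_train(X, n_components=2):
    """Learning phase: mean, normalization scale, covariance and its leading eigenvectors."""
    mean = X.mean(axis=0)
    std = X.std(axis=0) + 1e-12            # avoid division by zero
    Z = (X - mean) / std                   # normalization step
    cov = np.cov(Z, rowvar=False)          # 16x16 covariance of the sensor channels
    vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    return mean, std, vecs[:, order]

def pca_project(X, mean, std, components):
    """Testing phase: project new sensor vectors onto the learned subspace."""
    return ((X - mean) / std) @ components

# Stand-in for the training matrix: 30 exposures of the 16-sensor array.
X_train = rng.normal(size=(30, 16))
mean, std, comps = pca_train(X_train, n_components=2)
print(pca_project(X_train[:3], mean, std, comps).shape)   # (3, 2)
```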
-
-
-
Sonar placement and deployment in a maritime environment
By Selim BoraGiven a water-terrain area of interest (waterways, ports, etc.), this paper attempts to efficiently allocate underwater sonars to achieve a reasonable amount of coverage within a limited budget. Coverage is defined as the capability of the sonars to detect threats. Though total coverage is desired, priority is given to the criticality/importance attached to each location of the area of interest on a grid-based system. Unlike other works in the literature, the developed model takes into consideration the uncertainty inherent in the detection probability of sonars. Apart from issues of sonar reliability, the underwater terrain, with its changing conditions, is bound to affect detection probabilities. While taking the specific physics of sonars into consideration in the model development, the model also adopts a hexagonal grid-based system to ensure more efficient placement of sonars. Based on an initially proposed mixed-integer program, a robust optimization model is also proposed to handle the uncertainties. For smaller-scale problems, the model works adequately within a relatively short time period. However, large-scale problems require extensive memory and take much longer. As such, a heuristic is proposed as an alternative to the proposed model. Experimental results indicate the heuristic works effectively under most circumstances and performs less effectively under a few limited scenarios.
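As a toy illustration of the grid-based, budget-constrained coverage model described (not the paper's actual formulation: the grid, detection physics, priorities, costs and coverage sets below are all hypothetical placeholders, and the robust counterpart is omitted), a priority-weighted coverage mixed-integer program can be sketched with PuLP as follows.

```python
import pulp

# Hypothetical data: 6 grid cells with priorities, 4 candidate sonar sites, unit costs, a budget.
cells = range(6)
sites = range(4)
priority = [3, 1, 2, 5, 2, 4]
cost = [2, 3, 2, 4]
budget = 6
covers = {0: [0, 1], 1: [1, 2, 3], 2: [3, 4], 3: [4, 5, 0]}   # cells each site can cover

prob = pulp.LpProblem("sonar_placement", pulp.LpMaximize)
place = pulp.LpVariable.dicts("place", sites, cat=pulp.LpBinary)
covered = pulp.LpVariable.dicts("covered", cells, cat=pulp.LpBinary)

# Maximize priority-weighted coverage subject to the budget.
prob += pulp.lpSum(priority[c] * covered[c] for c in cells)
prob += pulp.lpSum(cost[s] * place[s] for s in sites) <= budget
for c in cells:
    # A cell counts as covered only if at least one placed sonar can reach it.
    prob += covered[c] <= pulp.lpSum(place[s] for s in sites if c in covers[s])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("selected sites:", [s for s in sites if place[s].value() == 1])
```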
-
-
-
Cloud-based development life cycle: Software testing as service
More LessCloud computing is an emerging paradigm, which is changing the way computing resources are provisioned, accessed, utilized, maintained and managed. The SWOT analysis for the Cloud is depicted in Table 1. Cloud computing is increasingly changing the way software products and services are produced and consumed, thereby implying the need for a change in the ways, methods, tools and concepts by which these products are tested. Software testing is an important quality control stage within the Software Development Lifecycle. Software testing involves both functional (e.g., bugs) and non-functional testing (e.g., regression). It verifies and validates the finished product to ensure that the development effort meets the requirements specification. This process often requires the consumption of resources over a limited period of time. These resources can be costly, or not readily available, which in turn can affect the efficiency of the testing process. Though this process is important, it is not a business-critical process because it does not contain overly sensitive business data, which makes it an ideal case for migration to the cloud. The increasing complexity and distribution of teams, applications, processes and services, along with the need for adequate testing approaches for cloud-based applications and services, creates a convincing case for cloud-based software testing. Cloud-based testing, or Software Testing as a Service, is a new way of carrying out testing as a service, using the cloud as the underlying platform to provide on-demand software testing services via the internet. Table 2 below shows a SWOT analysis for cloud-based testing, from which a comparison between traditional software testing and cloud-based testing can be made and the advantages of the cloud approach can be drawn. A number of major industrial players such as IBM, HP, UTest, SOASTA, Sogetti, and SauceLabs, to mention a few, now offer various cloud-based testing services, which presents many advantages to customers. Though cloud-based testing presents many advantages and benefits over traditional testing, it cannot wholly replace it, because there remain testing areas and scenarios where synergies and trade-offs between the two must be weighed. For example, some testing areas requiring implicit domain knowledge about the customer's business (such as the insurance business), or areas where hardware and software are integral to each other and directly interdependent (such as programmable logic controllers), may require the adoption of traditional testing practices over cloud-based testing. This represents an area for further research: developing intelligent, context-aware cloud-based testing services with the ability to recreate or mimic areas/scenarios requiring implicit domain knowledge. Furthermore, there is a lack of adequate support tools for cloud-based testing services. These tools include self-learning test case libraries and tools for measuring cloud-based testing services. This paper will present our research efforts in the area of cloud-based collaborative software development life cycles, with particular focus on the feasibility of provisioning software testing as a cloud service. This research has direct industrial implications and holds large research and business potential.
-
-
-
Grand Challenges For Sustainable Growth: Irish Presidency Of The Eu Council 2013 In Review
By Soha MaadThe presentation will review the outcomes of the Irish Presidency of the EU Council for 2013 in addressing grand challenges for sustainable growth, with special emphasis on the digital (IT) agenda for Europe. The Irish Presidency of the EU Council for 2013 made great achievements, including the agreement of the seven-year EU budget (including the budget to tackle youth unemployment, the €70 billion Horizon 2020 programme for research and innovation, the €30 billion budget for the Connecting Europe Facility targeting enhancements in transport, energy and telecoms, and the €16 billion budget for the Erasmus programme), the reform of the Common Agriculture Policy (CAP) and Common Fisheries Policy (CFP), and the brokerage of various partnerships and trade agreements (the most important being the €8 billion EU-US agreement and the EU-Japan agreement). An estimated 200 policy commitments were achieved, including more than 80 in legislative form. The presentation will put particular emphasis on the digital agenda for Europe and the horizon for international collaboration to tackle grand challenges for sustainable growth and the application of ICT to address these challenges. A brief overview of key related events held during the Irish Presidency of the EU Council will be given, and a book launch event elaborating on the topic and content of the presentation will be announced.
-
-
-
ChiQat: An intelligent tutoring system for learning computer science
More LessFoundational topics in Computer Science (CS), such as data structures and algorithmic strategies, pose particular challenges to learners and teachers. The difficulty of learning these basic concepts often discourages students from further study and leads to lower success rates. In any discipline, students' interaction with skilled tutors is one of the most effective strategies to address weak learning. However, human tutors are not always available. Technology can compensate here: Intelligent Tutoring Systems (ITSs) are systems designed to simulate the teaching of human tutors. ITSs use artificial intelligence techniques to guide learners through problem-solving exercises using pedagogical strategies similar to those employed by human tutors. ChiQat-Tutor, a novel ITS that we are currently developing, aims at facilitating the learning of basic CS data structures (e.g., linked lists, trees, stacks) and algorithmic strategies (e.g., recursion). The system will use a number of pedagogical strategies shown to be effective in CS education, including positive and negative feedback, learning from worked-out examples, and learning from analogies. ChiQat will support linked lists, trees, stacks, and recursion. The ChiQat linked list module builds on iList, our previous ITS, which has been shown to help students learn linked lists. This module provides learners with a simulated environment where linked lists can be seen, constructed, and manipulated. Lists are represented graphically and can be manipulated interactively using programming commands (C++ or Java). The system can currently follow the solution strategy of a student and provides personalized positive and negative feedback. Our next step is to add support for worked-out examples. The recursion module provides learners with an animated and interactive environment where they can trace the recursive calls of a given recursive problem. This is one of the most challenging tasks students face when learning the concept of recursion. In the near future, students will be aided in breaking down recursive problems into their basic blocks (base case and recursive case) through interactive dialogues. ChiQat employs a flexible, fault-tolerant, distributed plug-in architecture, where each plug-in fulfills a particular role. This configuration allows different system types to be defined, such as all-in-one applications or distributed ones. The system is composed of separate front and back ends. The back-end houses the main logic for heavy computational tasks such as problem knowledge representation and tutor feedback generation. The front-end (user interface) collects user input and sends it to the back-end, while displaying the current state of the problem. Thanks to this flexible architecture, the system can run in two modes: online and offline. Offline mode runs all client and server components on the same machine, allowing the user to use the system in a closed environment. Online mode runs the back-end on a server as a web service that communicates with a front-end running on the client machine. This allows wider reachability for users on lower-powered connected devices such as mobile devices, as well as traditional laptops and desktop computers connected to the Internet.
-
-
-
Logic as a ground for effective and reliable web applications
More LessUnderstanding modern query languages provides key insights for the design of secure, effective and novel web applications. With the ever-expanding volume of web data, two data shapes have clearly emerged as flexible ways of representing information: trees (such as most XML documents) and graphs (such as sets of RDF triples). Web applications that process, extract and filter such input data structures often rely on query languages such as XPath and SPARQL for that purpose. This has notably triggered research initiatives such as NoSQL aimed at a better understanding and more effective implementations of these languages. In parallel, the increasing availability of surging volumes of data calls for techniques that make these languages scale, in order to query data of higher orders of magnitude in size. The development of big-data-ready, efficient and scalable query evaluators is challenging in several interdependent aspects: one is parallelization, that is, how to evaluate a query by leveraging a cluster of machines. Another critical aspect consists in finding techniques for placing data on the cloud in a clever manner so as to limit data communication and thus reduce the global workload. In particular, one difficulty resides in optimizing data partitioning for the execution of subqueries, possibly taking into account additional information on data organization schemes (such as XML Schemas or OWL descriptions). At the same time, growing concerns about data privacy urge the development of analyzers for web data access control policies. We believe that static analysis of web query languages will play an increasingly important role, especially in all the aforementioned situations. In this context, we argue that modal logic can provide useful yardsticks for characterizing these languages in terms of expressive power, and also in terms of complexity for the problem of query answering and for the problems of static analysis of queries. Furthermore, model-checkers and satisfiability-checkers for modal logics such as the mu-calculus can serve as a robust ground for designing scalable query evaluators and powerful static analyzers, respectively.
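As a small concrete example of the tree-shaped data and query languages discussed above, the sketch below evaluates two XPath queries over a toy XML fragment with lxml; the document and queries are invented for illustration and are exactly the kind of expressions a static analyzer based on modal logic would reason about.

```python
from lxml import etree

# A toy XML fragment standing in for tree-shaped web data.
doc = etree.fromstring(
    "<catalog>"
    "<book lang='en'><title>Logic</title><price>30</price></book>"
    "<book lang='fr'><title>Graphes</title><price>25</price></book>"
    "</catalog>"
)

# XPath queries of the kind a static analyzer would check for satisfiability or containment.
print(doc.xpath("//book[@lang='en']/title/text()"))   # ['Logic']
print(doc.xpath("count(//book[price < 28])"))          # 1.0
```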
-
-
-
On green planning and management of cellular networks in urban cities
By Zaher DawyEnergy is becoming a main concern nowadays due to the increasing demands on natural energy resources. Base stations (BSs) consume up to 80% of the total energy expenditure in a cellular network. The energy-efficiency of the BSs decreases significantly at off-peak hours, since the power amplifiers' energy-efficiency degrades at lower output power. Thus, power-saving methods should focus on the access network level by managing the BSs' power consumption. This can be done by reducing the number of active elements (e.g., BSs) in the network during lower traffic states by switching some BSs off. In this case, network management should allow a smooth transition between different network topologies based on the traffic demands. In this work, we evaluate a green radio network planning approach by jointly optimizing the number of active BSs and the BS on/off switching patterns based on the changing traffic conditions in the network, in an effort to reduce the total energy consumption of the BSs. Planning is performed using two approaches: a reactive and a proactive approach. In the proactive approach, planning starts from the lowest traffic demand and proceeds to the highest, whereas the reactive approach proceeds in the reverse order. Performance results are presented for various case studies and evaluated taking into account practical network planning considerations. Moreover, we present real planning results for an urban city environment using the ICS telecom tool from ATDI to perform coverage calculations and analysis for LTE networks.
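The sketch below is a deliberately simplified greedy illustration of the BS on/off idea, not the joint optimization evaluated in this work: it keeps just enough base stations active to cover the current traffic demand with a margin, with all capacity and power figures being hypothetical.

```python
# Toy greedy heuristic (hypothetical numbers): switch off base stations during low-traffic
# states as long as the remaining active capacity still covers the demand with a margin.
BS_CAPACITY = 100.0                  # traffic units served by one active base station
P_ACTIVE, P_SLEEP = 1500.0, 100.0    # power draw in watts when active / switched off

def plan_active_bs(num_bs: int, traffic_demand: float, margin: float = 1.2):
    """Return (active BS count, total power) keeping capacity >= margin * demand."""
    needed = max(1, -(-int(margin * traffic_demand) // int(BS_CAPACITY)))  # ceiling division
    active = min(num_bs, needed)
    power = active * P_ACTIVE + (num_bs - active) * P_SLEEP
    return active, power

# Proactive sweep from the lowest to the highest traffic state of a 10-BS cluster.
for demand in (50, 200, 450, 800):
    print(demand, plan_active_bs(10, demand))
```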
-
-
-
Cooperative relaying for idle band integration in spectrum sharing systems
By Syed HussainRecent developments in wireless communications and the emergence of high data rate services have consumed almost all the accessible spectrum, making it a very scarce radio resource. Spectrum from very low frequencies up to several GHz has been either dedicated to a particular service or licensed to its providers. It is very difficult to find sufficient bandwidth for new technologies and services within the accessible spectrum range. Conversely, studies in different parts of the world reveal that the licensed and/or dedicated spectrum is underutilized, leaving unused bands at different frequencies. These idle bands, however, cannot be used by non-licensed users under current spectrum management practices throughout the world. This fact has forced regulatory authorities and academia to rethink spectrum allocation policies. The result is the idea of spectrum sharing systems, generally known as cognitive radio, in which non-licensed or secondary users can access the spectrum licensed to the primary users. Many techniques and procedures have been suggested in recent years for smooth and transparent spectrum sharing among primary and secondary users. The most common approach suggests that the secondary users should perform spectrum sensing to identify the unused bands and exploit them for their own transmission. However, as soon as the primary user becomes active in that band, the secondary transmission should be switched off or moved to some other idle band. A major problem faced by the secondary users is that the average width of the idle bands available at different frequencies is not large enough to support high data rate wireless applications and services. A possible solution is to integrate a few idle bands together to obtain a larger bandwidth. This technique is also known as spectrum aggregation. Generally, it is proposed to build the transmitter with multiple radio frequency chains which are activated according to the availability of idle bands. A combiner or aggregator is then used to transmit the signal through the antenna. Similarly, a receiver can be realized through multiple receive RF chains behind a separator or splitter. Another option is to use orthogonal frequency division multiplexing, in which sub-carriers can be switched on and off based on unused and active primary bands, respectively. These solutions have been developed and analyzed for direct point-to-point links between the nodes. In this work, we analyze spectrum aggregation for indirect links through multiple relays. We propose a simple mechanism for idle band integration in a secondary cooperative network. A few relays in the system each facilitate the source in aggregating part of the available idle bands, and collectively the involved relays provide an aggregated larger bandwidth for the source-to-destination link. We analyze two commonly used forwarding schemes at the relays, namely amplify-and-forward and decode-and-forward. We focus on the outage probability of the scheme and derive a generalized closed-form expression applicable to both scenarios. We analyze the system performance under different influential factors and reveal some important trade-offs.
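As a hedged illustration of the outage metric analyzed here (not the derived closed-form expression), the following Monte Carlo sketch estimates the outage probability of a two-hop decode-and-forward relay link over Rayleigh fading, where the end-to-end rate is limited by the weaker hop.

```python
import numpy as np

rng = np.random.default_rng(0)

def df_outage(snr_db: float, rate_bps_hz: float, n_trials: int = 200_000) -> float:
    """Outage of a two-hop decode-and-forward relay link over Rayleigh fading:
    the end-to-end SNR is limited by the weaker hop, min(g_sr, g_rd) * snr."""
    snr = 10 ** (snr_db / 10)
    g_sr = rng.exponential(1.0, n_trials)    # |h|^2 on the source-to-relay hop
    g_rd = rng.exponential(1.0, n_trials)    # |h|^2 on the relay-to-destination hop
    # The factor 1/2 reflects the two orthogonal transmission phases of half-duplex relaying.
    capacity = 0.5 * np.log2(1 + np.minimum(g_sr, g_rd) * snr)
    return float(np.mean(capacity < rate_bps_hz))

for snr_db in (5, 10, 15, 20):
    print(snr_db, "dB ->", df_outage(snr_db, rate_bps_hz=1.0))
```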
-
-
-
PhosphoSiteAnalyzer: Analyzing complex cell signalling networks
More LessPhosphoproteomic experiments are routinely conducted in laboratories worldwide, and because of the fast development of mass spectrometric techniques and efficient phosphopeptide enrichment methods, life-science researchers frequently end up with lists of tens of thousands of phosphorylation sites for further interrogation. To answer biologically relevant questions from these complex data sets, it becomes essential to apply computational, statistical, and predictive analytical methods. Recently we have provided an advanced bioinformatic platform termed “PhosphoSiteAnalyzer” to the scientific community to explore large phosphoproteomic data sets that have been subjected to kinase prediction using the previously published NetworKIN algorithm. NetworKIN applies sophisticated linear motif analysis and contextual network modeling to obtain kinase-substrate associations with high accuracy and sensitivity. PhosphoSiteAnalyzer provides an algorithm for retrieving kinase predictions from the public NetworKIN webpage in a semi-automated way and thereafter applies advanced statistics to facilitate a user-tailored in-depth analysis of the phosphoproteomic data sets. The interface of the software provides a high degree of analytical flexibility and is designed to be intuitive for most users. Network biology, and in particular kinase-substrate network biology, provides an adequate conceptual framework to describe and understand diseases and to design targeted biomedicine for personalized medicine. Hence network biology and network analysis are absolutely essential to translational medical research. PhosphoSiteAnalyzer is a versatile bioinformatics tool to decipher such complex networks and can be used in the fight against serious diseases such as psychological disorders, cancer and diabetes that arise as a result of dysfunctional cell signalling networks.
-
-
-
Opportunistic Cooperative Communication Using Buffer-Aided Relays
More LessSpectral efficiency of a communication system refers to the information rate that the system can transmit reliably over the available bandwidth (spectrum) of the communication channel. Enhancing the spectral efficiency is without doubt a major objective for the designers of next generation wireless systems. It is evident that the telecommunications industry is rapidly growing due to the high demand for ubiquitous connectivity and the popularity of high data rate multimedia services. As is well known, wireless channels are characterized by temporal and spectral fluctuations due to physical phenomena such as fading and shadowing. A well-established approach to exploit the variations of the fading channel is opportunistic communication, which means transmitting at high rates when the channel is good and at low rates or not at all when the channel is poor. Furthermore, in the last few years, the research focus has turned to exploiting the broadcast nature of the wireless medium and the potential gains of exploiting the interaction (cooperation) between neighboring nodes in order to enhance the overall capacity of the network. Cooperative communication will be one of the major milestones in the next decade for the emerging fourth and fifth generation wireless systems. Cooperative communication can take several forms, such as relaying the information transmitted by other nodes, coordinated multi-point transmission and reception techniques, combining several information flows together using network coding in order to exploit side information available at the receiving nodes, and interference management in dense small cell networks and cognitive radio systems to magnify the useful information transmission rates. We propose to exploit all sources of capacity gains jointly. We want to benefit from old, yet powerful, and new transmission techniques. Specifically, we want to examine optimal resource allocation and multiuser scheduling in the context of the emerging network architectures that involve relaying, network coding and interference handling techniques. We call this theme opportunistic cooperative communication. With the aid of opportunistic cooperative communication we can jointly exploit many sources of capacity gains such as multiuser diversity, multihop diversity, the broadcast nature of the wireless medium and the side information at the nodes. We suggest exploring opportunistic cooperative communication as the choice for future digital communications and networking. In this direction, we introduce the topic of buffer-aided relaying as an important enabling technology for opportunistic cooperative communication. The use of buffering at the relay nodes enables storing the received messages temporarily before forwarding them to the destined receivers. Therefore, buffer-aided relaying is a prerequisite for applying dynamic opportunistic scheduling to exploit the channel diversity and obtain considerable throughput gains. Furthermore, these capacity gains can be integrated with other valuable sources of capacity gains that can be obtained using, e.g., multiuser scheduling, network coding over bidirectional relays, and interference management and primary-secondary cooperation in overlay cognitive radio systems. The gains in achievable spectral efficiency are valuable and hence should be considered for practical implementation in next generation broadband wireless systems. Furthermore, this topic can be further exploited in other scenarios and applications.
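A minimal sketch of the buffer-aided relaying idea follows, assuming a single relay with a finite buffer and Rayleigh-faded hops (all parameters are hypothetical): in each slot the stronger feasible link, source-to-relay or relay-to-destination, is activated, which is the kind of dynamic opportunistic scheduling that buffering enables.

```python
import numpy as np

rng = np.random.default_rng(1)

def max_link_throughput(n_slots=100_000, buffer_size=8, snr=10.0, rate=1.0):
    """Toy buffer-aided relaying: each slot, activate whichever hop (S->R or R->D)
    currently has the better channel, respecting buffer occupancy."""
    queue, delivered = 0, 0
    for _ in range(n_slots):
        g_sr, g_rd = rng.exponential(1.0), rng.exponential(1.0)
        ok_sr = np.log2(1 + snr * g_sr) >= rate and queue < buffer_size
        ok_rd = np.log2(1 + snr * g_rd) >= rate and queue > 0
        if ok_sr and (not ok_rd or g_sr >= g_rd):
            queue += 1                      # source transmits; packet is buffered at the relay
        elif ok_rd:
            queue -= 1
            delivered += 1                  # relay forwards a buffered packet
    return delivered * rate / n_slots       # end-to-end throughput in bits/s/Hz

print(max_link_throughput())
```

The point of the sketch is only to show how the buffer decouples the two hops, so that each slot can exploit whichever channel happens to be strong, rather than being locked to a fixed source-then-relay schedule.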
-
-
-
Security-Smart-Seamless (Sss) Public Transportation Framework For Qatar Using Tv White Space (Tvws)
More LessThe present Qatari government has a long-term vision of introducing intelligent transport, logistics management and road safety services in Qatar. Studies have shown that the public transport system in Qatar, and Doha in particular, is developing, but is not yet as comprehensive as in many renowned world cities (Pernin et al., 2008; Henry et al., 2012). Furthermore, with the hosting rights of the FIFA 2022 World Cup granted to Qatar, a recent paper discussed the 2030 Qatar National Vision aim of a world-class transport system for Qatar that meets the requirements of the country's long-term goals (Walker, 2013). The introduction of an intelligent public transport system involves incorporating technology into the transportation system so as to improve public safety, conserve resources through a seamless transport hub, and introduce smartness for maximum utility in public transportation. The aforementioned goals of the 2030 Qatar National Vision can be achieved through TVWS technology. TVWS technology was created to make use of the sparsely used VHF and UHF spectrum bands, as indicated in Figure 1. The project focuses on IEEE 802.22 to enhance a Security, Smart, Seamless (SSS) transportation system in Qatar, as shown in Figure 2 below. It is sub-divided as follows: (i) Security: TVWS provides surveillance cameras in the public bus and train system. The bus/train system will be fitted with micro-cameras. The project will provide the city center management team the ability to monitor and track the public transportation system in case of accidents, terrorism and other social issues. (ii) Seamless: TVWS will be made available to anyone who purchases a down/up converter terminal to access internet services for free. The need for an up/down converter arises because current mobile devices operate in the ISM bands, whereas TVWS operates in the VHF/UHF bands. (iii) Smart: the city center management can sit in their office and take control of any eventuality. Past projects of this kind would have relied on satellite technology, whose limitations, such as the round-trip delay, are well known. (iv) Novelty: spectrum sensing using a Grey prediction algorithm is proposed to achieve optimal results. From an academic point of view, the Grey prediction algorithm has been used in predicting handoff in cellular communication and in stock market prediction, with a high prediction accuracy of 95-98% (Sheu and Wu, 2000; Kayacan et al. 2010). The proposed methodology is shown in Figure 3. The IEEE 802.22 Wireless Regional Area Network (WRAN) has a cell radius ranging from 10 to 100 km, leaning towards a macro-cell architecture. Hence, the roll-out will require less base station infrastructure. In addition, the VHF-UHF bands offer desirable propagation qualities compared to other frequency bands, thereby ensuring wider and more reliable radio coverage.
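As an illustration of the Grey prediction component (hedged: the abstract does not specify the exact model, so a standard GM(1,1) formulation is assumed here, and the occupancy data are hypothetical), the sketch below fits the accumulated series and forecasts the next sensing windows.

```python
import numpy as np

def gm11_forecast(x0: np.ndarray, steps: int = 1) -> np.ndarray:
    """GM(1,1) grey prediction: fit the accumulated series and forecast `steps` ahead."""
    x1 = np.cumsum(x0)                              # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]     # grey development / control coefficients
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])     # back to the original (non-accumulated) series
    x0_hat[0] = x0[0]
    return x0_hat[-steps:]

# Hypothetical channel-occupancy ratios observed over five sensing windows.
occupancy = np.array([0.62, 0.58, 0.55, 0.51, 0.48])
print(gm11_forecast(occupancy, steps=2))            # forecast of the next two windows
```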
-
-
-
Visualization Methods And Computational Steering Of The Electron Avalanches In The High-Energy Particle Detector Simulators
More LessThe traditional cycle in the simulation of electron avalanches, and in any scientific simulation, is to prepare input, execute the simulation, and visualize the results as a post-processing step. Usually, such simulations are long-running and computationally intensive. It is not unusual for a simulation to keep running for several days or even weeks. If the experiment leads to the conclusion that there is incorrect logic in the application, or that input parameters were wrong, then the simulation has to be restarted with correct parameters. The most common method of analyzing simulation results is to gather the data on disk and visualize it after the simulation finishes. Electron avalanche simulations can generate millions of particles, which can require a huge amount of disk I/O. The disk, being inherently slow, can become the bottleneck and degrade the overall performance. Furthermore, these simulations are commonly run on a supercomputer. The supercomputer maintains a queue of researchers' programs and executes them as time and priorities permit. If the simulation produces incorrect results and there is a need to restart it with different input parameters, it may not be possible to restart it immediately because the supercomputer is typically shared by several other researchers. The simulations (or jobs) have to wait in the queue until they are given a chance to execute again. This increases the scientific simulation cycle time and hence reduces the researcher's productivity. This research work proposes a framework that lets researchers visualize the progress of their experiments so that they can detect potential errors at early stages. It will not only enhance their productivity but will also increase the efficiency of the computational resources. This work focuses on the simulation of the propagation and interactions of electrons with ions in particle detectors known as Gas Electron Multipliers (GEMs). However, the proposed method is applicable to any scientific simulation, from small to very large scale.
-
-
-
Kernel collaborative label power set system for multi-label classification
More LessA traditional multi-class classification system assigns each example x a single label l from a set of disjoint labels L. However, in many modern applications such as text classification, image/video categorization, and music categorization [1, 2], each instance can be assigned to a subset of labels Y ⊆ L. In text classification, a news document can cover several topics such as the name of a movie, box office ratings, and/or critic reviews. In image/video categorization, multiple objects can appear in the same image/video. This problem is known as multi-label learning. Figure 1 shows some examples of multi-label images. Collaborative Representation with regularized least squares (CRC-RLS) is a state-of-the-art face recognition method that exploits the collaborative representation between classes in representing the query sample [3]. The basic idea is to code the testing sample over a dictionary, and then classify it based on the coding vector. While the benefits of collaborative representation are becoming well established for face recognition, and multi-class classification in general, its use for multi-label classification needs to be investigated. In this research, a kernel collaborative label power set multi-label classifier (ML-KCR) based on the regularized least squares principle is proposed. ML-KCR directly introduces the discriminative information of the samples using l2-norm "sparsity" and uses the class-specific representation residual for classification. Further, in order to capture correlation among classes, the multi-label problem is transformed using the label power set, which is based on the concept of handling sets of labels as single labels, thus allowing the classification process to inherently take into account the correlations between labels. The proposed approach is applied to six publicly available multi-label data sets from different domains using five different multi-label classification measures. We validate the advocated approach experimentally and demonstrate that it yields significant performance gains when compared with state-of-the-art multi-label methods. In summary, our main contributions are as follows. * A kernel collaborative label power set classifier (ML-KCR) based on the regularized least squares principle is proposed for multi-label classification. ML-KCR directly introduces the discriminative information and aims to maximize the margins between the samples of different classes in each local area. * In order to capture correlation among labels, the multi-label problem is transformed using the label power set (LP). The main disadvantage associated with LP is the complexity issue arising from the many distinct label sets. We show that this complexity issue can be avoided using collaborative representation with regularization. * We applied the proposed approach to publicly available multi-label data sets and compared it with state-of-the-art multi-label methods. The proposed method is compared with the state-of-the-art multi-label classifiers RAkEL, ECC, CLR, MLkNN, and IBLR [2]. References [1] Tsoumakas, G., Katakis, I., Vlahavas, I., 2009. Data Mining and Knowledge Discovery Handbook. Springer, 2nd Edition, Ch. Mining Multilabel Data. [2] Zhang, M.-L., Zhou, Z.-H., 2013. A review on multi-label learning algorithms. IEEE Transactions on Knowledge and Data Engineering (preprint). [3] Zhang, L., Yang, M., 2011. Sparse representation or collaborative representation: Which helps face recognition? In: IEEE International Conference on Computer Vision (ICCV).
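A minimal sketch of the collaborative representation with regularized least squares idea (CRC-RLS [3]) underlying ML-KCR is given below: the query is coded over the training dictionary with an l2 penalty and assigned to the class (after an LP transformation, a label set) with the smallest class-specific residual. The kernelization and the label power set machinery of the proposed method are omitted, and the data are synthetic.

```python
import numpy as np

def crc_rls_fit(X_train: np.ndarray, lam: float = 0.01) -> np.ndarray:
    """Precompute the projection P = (X^T X + lam I)^-1 X^T (columns of X are training samples)."""
    d = X_train.shape[1]
    return np.linalg.solve(X_train.T @ X_train + lam * np.eye(d), X_train.T)

def crc_rls_predict(x, X_train, y_train, P):
    """Code the query collaboratively, then classify by class-specific reconstruction residual."""
    alpha = P @ x
    best, best_class = np.inf, -1
    for c in np.unique(y_train):
        mask = (y_train == c)
        residual = np.linalg.norm(x - X_train[:, mask] @ alpha[mask]) / np.linalg.norm(alpha[mask])
        if residual < best:
            best, best_class = residual, c
    return best_class

# Tiny synthetic example: columns are samples; labels could be label-power-set codes.
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(m, 0.3, size=5) for m in (0, 0, 1, 1)])
y = np.array([0, 0, 1, 1])
P = crc_rls_fit(X)
print(crc_rls_predict(X[:, 0] + rng.normal(0, 0.1, 5), X, y, P))
```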
-
-
-
Simultaneous estimation of multiple phase information in a digital holographic configuration
More LessThe automated and simultaneous extraction of multiple phase distributions and their derivatives continues to pose major challenges. A possible reason is the lack of proper data processing concepts to support the multiple wave mixing that must be introduced to make the configuration simultaneously sensitive to multiple phase components, and yet be able to decrypt each component of the phase efficiently and robustly, in the absence of any cross-talk. This paper demonstrates a phase estimation method for encoding and decoding the phase information in a digital holographic configuration. The proposed method relies on local polynomial phase approximation and a subsequent state-space formulation. The polynomial approximation of the phase transforms multidimensional phase extraction into a parameter estimation problem, and the state-space modeling allows the application of Kalman filtering to estimate these parameters. The prominent advantages of the method include high computational efficiency, the ability to handle rapid spatial variations in the fringe amplitude, and no requirement for two-dimensional unwrapping algorithms. The performance of the proposed method is evaluated using numerical simulation.
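The sketch below is a heavily simplified, linear illustration of the state-space/Kalman filtering step (the actual method estimates polynomial phase parameters from the recorded fringes): it tracks a locally linear phase, i.e. the state [phase, slope], from noisy phase samples, just to show the predict/update recursion. All signal parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated noisy samples of a slowly varying phase along one row of the hologram.
n = 200
true_phase = 0.5 + 0.03 * np.arange(n) + 2e-4 * np.arange(n) ** 2
z = true_phase + rng.normal(0, 0.05, n)

# State = [phase, local phase slope]; constant-slope model with small process noise.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-6 * np.eye(2)
R = np.array([[0.05 ** 2]])

x = np.array([z[0], 0.0])
P = np.eye(2)
estimates = []
for zk in z:
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([zk]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    estimates.append(x[0])

print(float(np.abs(np.array(estimates[10:]) - true_phase[10:]).mean()))  # mean tracking error
```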
-
-
-
Automatic long audio alignment for conversational Arabic speech
More LessLong audio alignment is a known problem in speech processing in which the goal is to align a long audio input with the corresponding text. Accurate alignments help in many speech processing tasks such as audio indexing, speech recognizer acoustic model training, and audio summarization and retrieval. In this work, we have collected more than 1400 hours of conversational Arabic speech extracted from Al-Jazeerah podcasts, along with the corresponding non-aligned text transcriptions. Podcast length varies from 20 to 50 minutes each. Five episodes have been manually aligned to be used in evaluating alignment accuracy. For each episode, a split-and-merge segmentation approach is applied to segment the audio file into small segments of average length 5 sec., with filled pauses at the boundaries of each segment. A pre-processing stage is applied to the corresponding raw transcriptions to remove titles, headings, images, speakers' names, etc. A biased language model (LM) is trained on the fly using the processed text. Conversational Arabic speech is mostly spontaneous and influenced by dialectal Arabic. Since phonemic pronunciation modeling is not always possible for non-standard Arabic words, a graphemic pronunciation model (PM) is utilized to generate one pronunciation variant for each word. Unsupervised acoustic model adaptation is applied to a pre-trained Arabic acoustic model using the current podcast audio. The adapted AM, along with the biased LM and the graphemic PM, is used in a fast speech recognition pass applied to the current podcast's segments. The recognizer's output is aligned with the processed transcriptions using the Levenshtein distance algorithm. This way we ensure error recovery, where misalignment of a certain segment does not affect the alignment of later segments. The proposed approach resulted in an alignment accuracy of 97% on the evaluation set. Most misalignment errors were found to occur in segments having significant background noise (music, channel noise, cross-talk, etc.) or significant speech disfluencies (truncated words, repeated words, hesitations, etc.). For some speech processing tasks, such as acoustic model training, it is required to eliminate misaligned segments from the training data. This is why a confidence scoring metric is proposed to accept/reject the aligner output. The score is provided for each segment and is essentially the minimum edit distance between the recognizer's output and the aligned text. By using confidence scores, it was possible to reject the majority of misaligned segments, resulting in 99% alignment accuracy. This work was funded by a grant from the Qatar National Research Fund under its National Priorities Research Program (NPRP) award number NPRP 09-410-1-069. The reported experimental work was performed at Qatar University in collaboration with the University of Illinois.
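A minimal sketch of the word-level Levenshtein alignment used to anchor the recognizer output to the processed transcription is shown below; the per-segment edit distance returned by such an alignment is also the quantity behind the proposed confidence score.

```python
def align(ref_words, hyp_words):
    """Word-level Levenshtein alignment with backtrace; returns (distance, aligned pairs)."""
    n, m = len(ref_words), len(hyp_words)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i
    for j in range(1, m + 1):
        D[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref_words[i - 1] == hyp_words[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1,         # deletion
                          D[i][j - 1] + 1,         # insertion
                          D[i - 1][j - 1] + cost)  # match / substitution
    # Backtrace to recover the word-to-word alignment.
    i, j, pairs = n, m, []
    while i > 0 or j > 0:
        if i > 0 and j > 0 and D[i][j] == D[i - 1][j - 1] + (ref_words[i - 1] != hyp_words[j - 1]):
            pairs.append((ref_words[i - 1], hyp_words[j - 1])); i, j = i - 1, j - 1
        elif i > 0 and D[i][j] == D[i - 1][j] + 1:
            pairs.append((ref_words[i - 1], None)); i -= 1
        else:
            pairs.append((None, hyp_words[j - 1])); j -= 1
    return D[n][m], list(reversed(pairs))

dist, pairs = align("the quick brown fox".split(), "the quick fox jumps".split())
print(dist, pairs)   # distance 2: 'brown' deleted, 'jumps' inserted
```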
-
-
-
Simultaneous fault detection, isolation and tracking design using a single observer-based module
By Nader MeskinFault diagnosis (FD) has received much attention for complex modern automatic systems such as cars, aircraft, rockets, and unmanned vehicles since the 1970s. In the FD research field, the diagnostic systems are often designed separately from the control algorithms, although it is highly desirable that both the control and diagnostic modules are integrated into one system module. Hence, the problem of simultaneous fault detection and control (SFDC) has attracted a lot of attention in the last two decades, both in the research and application domains. The simultaneous design unifies the control and detection units into a single unit, which results in less complexity compared with a separate design, making it a reasonable approach. However, the current literature in the field of SFDC suffers from the following limitations and drawbacks. First, most of the literature that considers the problem of SFDC can achieve the control objective of "regulation", but none considers the problem of "tracking" in SFDC design. Therefore, considering the problem of tracking in an SFDC design methodology is of great significance and importance. Second, although most of the current references in the field of SFDC can achieve acceptable fault detection, they cannot achieve fault isolation. Hence, although there are certain published works in the field of SFDC, none of them is capable of detecting and isolating simultaneous faults in the system as well as tracking the specified reference input. In this paper, the problem of simultaneous fault detection, isolation and tracking (SFDIT) design for linear continuous-time systems is considered. An H_infinity/H_index formulation of the SFDIT problem using a dynamic observer detector and state feedback controller is developed. Indeed, a single module based on a dynamic observer is designed which produces two signals, namely the residual and the control signals. The SFDIT module is designed such that the effects of disturbances and reference inputs on the residual signals are minimized (for accomplishing fault detection), subject to the constraint that the transfer matrix function from the faults to the residuals is equal to a pre-assigned diagonal transfer matrix (for accomplishing fault isolation), while the effects of disturbances, reference inputs and faults on the specified control output are minimized (for accomplishing fault-tolerant control and tracking). Sufficient conditions for solvability of the problem are obtained in terms of linear matrix inequality (LMI) feasibility conditions. On the other hand, it is shown that by applying our methodology, the computational complexity, from the viewpoint of the number and size of required observers, is significantly reduced in comparison with all of the existing methodologies. Moreover, using this approach the system can not only detect and isolate the occurring faults but is also able to track the specified reference input. The proposed method can also handle the isolation of simultaneous faults in the system. Simulation results for an autonomous unmanned underwater vehicle (AUV) illustrate the effectiveness of our proposed design methodology.
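The synthesis conditions of the paper are specific H_infinity/H_index LMIs that are not reproduced here; as a generic illustration of how LMI feasibility conditions are posed and solved numerically, the sketch below checks a Lyapunov-type LMI (P positive definite with A^T P + P A negative definite) with CVXPY for a hypothetical system matrix.

```python
import cvxpy as cp
import numpy as np

# Hypothetical stable system matrix (not taken from the paper).
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-4
M = A.T @ P + P @ A
constraints = [
    P >> eps * np.eye(2),                 # P positive definite
    0.5 * (M + M.T) << -eps * np.eye(2),  # Lyapunov LMI, written in explicitly symmetric form
]
prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve()
print(prob.status)
print(np.round(P.value, 3))
```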
-
-
-
Towards computational offloading in mobile device clouds
More LessWith the rise in mobile device adoption, and growth in the mobile application market expected to reach $30 billion by the end of 2013, mobile users' expectations for pervasive computation and data access are unbounded. Yet various applications, such as face recognition, speech and object recognition, and natural language processing, exceed the limits of standalone mobile devices. Such applications resort to exploiting larger resources in the cloud, which has sparked research on problems arising from data and computational offloading to the cloud. Research in this area has mainly focused on profiling and offloading tasks to remote cloud resources, automatically transforming mobile applications by provisioning and partitioning their execution into offloadable tasks, and, more recently, bringing computational resources (e.g. Cloudlets) closer to task initiators in order to save mobile device energy. In this work, we argue for environments in which computational offloading is performed among mobile devices forming what we call a Mobile Device Cloud (MDC). Our contributions are: (1) Implementing an emulation testbed for quantifying the potential gain, in execution time or energy consumed, of offloading tasks to an MDC. This testbed includes a client offloading application, an offloadee server receiving tasks, and a traffic shaper situated between the client and server emulating different communication technologies (Bluetooth 3.0, Bluetooth 4.0, WiFi Direct, WiFi, and 3G). Our evaluation of offloading tasks with different data and computation characteristics to an MDC registers up to 80% and 90% savings in time and energy, respectively, as opposed to offloading to the cloud. (2) Providing an MDC experimental platform to enable future evaluation and assessment of MDC-based solutions. We create a testbed, shown in Figure 1, to measure the energy consumed by a mobile device when running or offloading tasks using different communication technologies. We build an offloading Android-based mobile application and measure the time taken to offload tasks, execute them, and receive the results from other devices within an MDC. Our experimental results show gains in time and energy savings, up to 50% and 26% respectively, from offloading within MDCs as opposed to executing tasks locally. (3) Providing solutions that address two major MDC challenges. First, due to mobility, offloadee devices leaving an MDC would seriously compromise performance. Therefore, we propose several social-based offloadee selection algorithms that exploit contact history between devices, as well as friendship relationships or common interests between device owners or users. Second, we provide solutions for balancing power consumption by distributing computational load across MDC members to prolong an MDC's lifetime. This need occurs when users need to maximize the lifetime of an ensemble of devices that belong to the same user or household. We evaluate the algorithms we propose for addressing these two challenges using real datasets that contain contact mobility traces and social information for conference attendees over the span of three days. Our results show the impact of choosing a suitable offloadee subset, the gain from leveraging social information, and how MDCs can live longer by balancing power consumption across their members.
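A toy offloading decision model in the spirit of the testbed measurements is sketched below; the bandwidths, power draws and task sizes are hypothetical placeholders, not the reported figures, and the model simply compares the device's local execution cost with the transfer-plus-remote-execution cost over different links.

```python
# Toy offloading decision (hypothetical numbers, not the measurements reported above):
# offload when sending the task's data and waiting for the remote result costs the mobile
# device less time or energy than executing the task locally.
LINKS = {                      # (bandwidth in Mbit/s, radio power in W) per technology
    "Bluetooth 4.0": (2.0, 0.10),
    "WiFi Direct":   (40.0, 0.80),
    "3G":            (4.0, 1.20),
}

def local_cost(cycles, cpu_speed_hz=1.2e9, cpu_power_w=0.9):
    t = cycles / cpu_speed_hz
    return t, t * cpu_power_w                       # (seconds, joules spent by the device)

def offload_cost(data_mbit, cycles, link, remote_speed_hz=10e9):
    bw, p_radio = LINKS[link]
    t = data_mbit / bw + cycles / remote_speed_hz   # transfer time + remote execution time
    return t, (data_mbit / bw) * p_radio            # device only spends energy on the radio

task = {"data_mbit": 8.0, "cycles": 3e9}            # e.g. a face-recognition job
print("local:", local_cost(task["cycles"]))
for link in LINKS:
    print(link, offload_cost(task["data_mbit"], task["cycles"], link))
```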
-
-
-
QALB: Qatar Arabic language bank
More LessAutomatic text correction has been attracting research attention for English and some other western languages. Applications for automatic text correction vary from improving language learning for humans and reducing noise in text input to natural language processing tools to correcting machine translation output for grammatical and lexical choice errors. Despite the recent focus on some Arabic language technologies, Arabic automatic correction is still a fairly understudied research problem. Modern Standard Arabic (MSA) is a morphologically and syntactically complex language, which poses multiple writing challenges not only to language learners, but also to Arabic speakers, whose dialects differ substantially from MSA. We are currently creating resources to address these challenges. Our project has two components: first is QALB (Qatar Arabic Language Bank), a large parallel corpus of Arabic sentences and their corrections, and second is ACLE (Automatic Correction of Language Errors), an Arabic text correction system trained and tested on the QALB corpus. The QALB corpus is unique in that: a) it will be the largest Arabic text correction corpus available, spanning two million words; b) it will cover errors produced by native-speakers, non-native speakers, and machine translation systems; and c) it will contain a trace of all the actions performed by the human annotators to achieve the final correction. This presentation describes the creation of two major components of the project: the web-based annotation interface and the annotation guidelines. QAWI (QALB Annotation Web Interface) is our web-based, language-independent annotation framework used for manual correction of the QALB corpus. Our framework provides intuitive interfaces for annotating text, managing a large number of human annotators and performing quality control. Our annotation interface, in particular, provides a novel token-based editing model for correcting Arabic text that allows us to reliably track all modifications. We demonstrate details of both the annotation and the administration interfaces as well as the back-end engine. Furthermore, we show how this framework is able to speed up the annotation process by employing automated annotators to correct basic Arabic spelling errors. We also discuss the evolution of our annotation guidelines from its early developments through its actual usage for group annotation. The guidelines cover a variety of linguistic phenomena, from spelling errors to dialectal variations and grammatical considerations. The guidelines also include a large number of examples to help annotators understand the general principles behind the correction rules and not simply memorize them. The guidelines were written in parallel to the development of our web-based annotation interface and involved several iterations and revisions. We periodically provided new training sessions to the annotators and measured their inter-annotator agreement. Furthermore, the guidelines were updated and extended using feedback from the annotators and the inter-annotator agreement evaluations. This project is supported by the National Priority Research Program (NPRP grant 4-1058-1-168) of the Qatar National Research Fund (a member of the Qatar Foundation). The statements made herein are solely the responsibility of the authors.
-
-
-
Pfmt-Dnclcpa Theory For Ballistic Spin Wave Transport Across Iron-Gadolinium Nanojunctions Presenting Structural Interfacial Disorder Between Iron Leads
More LessIt is widely accepted at present that electronics-based information-processing technology has fundamental limitations. A promising alternative to electronic excitations is provided by the spin waves of magnetically ordered systems, which usher in a potentially powerful solution towards fabricating devices that transmit and process information (Khitun and Wang 2006). This approach to information-processing technology, known as magnonics, is rapidly growing (Kruglyak and Hicken 2006, Choi et al. 2007), and key magnonic components such as wave guides, emitters, nanojunctions and filters (Khater et al. 2011) are currently explored as basic elements of magnonic circuitry. This paper deals with the theory of ballistic spin wave transport across ultrathin iron-gadolinium nanojunctions, ..-Fe] [Gd]nML [Fe-.., which are known to present structural interfacial disorder; n is the number of gadolinium monoatomic planes between the iron leads. It is shown that our PFMT-DNLCPA theory gives a detailed and complete analysis of the properties of the ballistic transmission, and of the corresponding reflection and absorption spectra, across the structurally disordered nanojunction. We have developed the dynamic non-local coherent phase approximation (DNLCPA) and the phase field matching theory (PFMT) methods (Ghader and Khater 2013), and fully integrate them to study the ballistic spin wave transport across such nanojunctions. The DNLCPA method yields a full description of the dynamics of the spin wave excitations localized on the nanojunction, and of their corresponding life-times and local densities of states. These are excitations propagating laterally in the nanojunction atomic planes with finite life-times, but their fields are localized along the direction normal to the nanojunction. Moreover, the calculations determine the reflection, transmission, and absorption spectra for spin waves incident at any arbitrary angle from the iron leads onto the nanojunction. The PFMT-DNLCPA calculated results vary with nanojunction thickness. In particular, the normal incidence transmission spectra present no absorption effects, and resonance-assisted maxima are identified, notably at low frequencies at microscopic and submicroscopic wavelengths, which shift to lower frequencies with increasing nanojunction thickness. These results render such systems interesting for potential applications in magnonic circuitry. Fig.1 Calculated DNLCPA-PFMT reflection and transmission spectra for spin waves at normal incidence from the iron leads onto the magnetic ..-Fe] [Gd]3ML [Fe-.. nanojunction, as a function of the spin wave energies in units J(Fe-Fe)S(Fe) of the iron exchange and its spin. Note the transmission-assisted maxima. Fig.2 Calculated absorption spectra for obliquely incident spin waves at the nanojunction cited in Fig.1, due to its structural interfacial disorder. Acknowledgements: The authors acknowledge QNRF financial support for the NPRP 4-184-1-035 project. References - S. Choi, K.S. Lee, K.Y. Guslienko, S.K. Kim, Phys. Rev. Lett. 98, 087205 (2007) - D. Ghader and A. Khater, to be published (2013) - A. Khater, B. Bourahla, M. Abou Ghantous, R. Tigrine, R. Chadli, Eur. Phys. J. B: Cond. Matter 82, 53 (2011) - A. Khitun and K. L. Wang, Proceedings of the Third International Conference on Information Technology, New Generations ITNG, 747 (2006) - V.V. Kruglyak and R.J. Hicken, J. Magn. Magn. Mater. 306, 191 (2006)
-
-
-
The Gas Electron Multiplier For Charged Particle Detection
By Maya Abi AklThe Gas Electron Multiplier (GEM) has emerged as a promising tool for charged particle detection. It is being developed as a candidate detection system for muons for the future upgrade of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC). It consists of a thin polymer foil, metal-coated on each side and pierced by a high density of holes (see figure). The potential difference between the electrodes and the high electric field generated in the holes further amplify the electrons released in the gas of the detector by the ionizing radiation or the charged particle crossing the detector. In this work, we will report on the performance of the GEM prototype in tests conducted at the CERN accelerator facilities using pion and muon beams. The main issues under study are the efficiency, gain uniformity, and spatial resolution of the detector.
-
-
-
Dynamic Non-Local Coherent Phase Approximation (Dnlcpa) Model For Spin Wave Dynamics In Ultrathin Magnetic Fe-Gd Films Presenting Interfacial Structural Disorder
More LessIt is widely believed in the semiconductor community that the progress of electronics-based information technology is coming to an end (ITRS 2007), owing to fundamental electronic limitations. A promising alternative to electrons is the use of spin wave excitations. This has ushered in a potentially powerful solution towards fabricating devices that use these excitations to transmit and process information (Khitun and Wang 2006). This new approach to information-processing technology, known as magnonics, is rapidly growing (Kruglyak and Hicken 2006), and key magnonic components such as spin wave guides, emitters, and filters are currently explored (Choi et al. 2007). The first working spin-wave-based logic device has been experimentally demonstrated by Kostylev et al. (2005). In the present paper we develop and apply a model to analyze the spin dynamics of iron-gadolinium films of a few Gd(0001) atomic planes between two Fe(110) atomic planes. These ultrathin systems may be deposited layer by layer on a nonmagnetic substrate using techniques like dc-sputtering or pulsed laser deposition. They constitute prototypes for iron-gadolinium nanojunctions between iron leads in magnonics. In this system the Fe/Gd interfaces present structural disorder due to the mismatch between the Fe_bcc and Gd_hcp lattices. This engenders a quasi-infinite ensemble of Fe-Gd cluster configurations at the interface. In the absence of DFT or ab initio results for the magnetic Fe-Gd exchange, we have developed an integration-based analytic approach to determine the spatial dependence of this exchange using available experimental data from the corresponding multilayer systems. A dynamic non-local CPA method is also developed to analyze the spin dynamics of the disordered systems. This DNLCPA introduces the idea of a scattering potential built up from the phase matching of the spin dynamics on structurally disordered Fe-Gd interface clusters with the spin dynamics of the virtual crystal. This method accounts properly for the quasi-infinite ensemble of interfacial structural configurations, and yields the configurationally averaged Green's function for the disordered system. The computations yield the spin wave eigenmodes, their energies, life-times, and local densities of states, for any given thickness of the ultrathin magnetic Fe-Gd film. Fig.1 DNLCPA calculated dispersion branches for the spin waves propagating in the ultrathin magnetic 1Fe-5Gd-1Fe film (7 atomic planes) presenting structural interfacial disorder. The normalized energies are in units of iron exchange and spin J(Fe-Fe)S(Fe). The curves are plotted as a function of the y-component of the wave-vector (inverse angstroms), the z-component = 0, in both figures. Fig.2 DNLCPA calculated life-times in picoseconds of the spin waves propagating in the ultrathin magnetic 1Fe-5Gd-1Fe film. Acknowledgements: The authors acknowledge QNRF financial support for the NPRP 4-184-1-035 project. References - S. Choi, K.S. Lee, K.Y. Guslienko, S.K. Kim, Phys. Rev. Lett. 98, 087205 (2007) - A. Khitun and K. L. Wang, Proceedings of the Third International Conference on Information Technology, New Generations ITNG, 747 (2006) - M.P. Kostylev, A.A. Serga, T. Schneider, B. Leven, B. Hillebrands, Appl. Phys. Lett. 87 153501 (2005) - V.V. Kruglyak and R.J. Hicken, J. Magn. Magn. Mater. 306, 191 (2006)
-
-
-
Quantum Imaging: Fundamentals And Promises
More LessQuantum imaging can be defined as an area of quantum optics that investigates the ultimate performance limits of optical imaging allowed by the quantum nature of light. Quantum imaging techniques possess a high potential for improving the performance in recording, storage, and readout of optical images beyond the limits set by the standard quantum level of fluctuations known as the shot noise. This talk aims at giving an overview of the fundamentals of quantum imaging as well as its most important directions. We shall discuss generation of the spatially multimode squeezed states of light by means of a travelling-wave optical parametric amplifier. We shall demonstrate that this kind of light allows us to reduce the spatial fluctuations of the photocurrent in a properly chosen homodyne detection scheme with a highly efficient CCD camera. It will be shown that, using the amplified quadrature of the light wave in a travelling-wave optical parametric amplifier, one can perform noiseless amplification of optical images. We shall provide recent experimental results demonstrating single-shot noiseless image amplification by a pulsed optical parametric amplifier. One of the important experimental achievements of quantum imaging, coined in the literature as the quantum laser pointer, is the precise measurement of the position and transverse displacement of a laser beam with a resolution beyond the limit imposed by the shot noise. We shall describe briefly the idea of the experiment in which the transverse displacement of a laser beam was measured with a resolution of the order of an Angstrom. The problem of precise measurement of the transverse displacement of a light beam brings us to a more general question about the quantum limits of optical resolution. The classical resolution criterion, derived in the nineteenth century by Abbe and Rayleigh, states that the resolution of an optical instrument is limited by diffraction and is related to the wavelength of the light used in the imaging scheme. However, it has long been recognised in the literature that in some cases, when one has some a priori information about the object, one can improve the resolution beyond the Rayleigh limit using so-called super-resolution techniques. As a final topic in this talk, we shall discuss the quantum limits of optical super-resolution. In particular, we shall formulate the standard quantum limit of super-resolution and demonstrate how one can go beyond this limit using specially designed multimode squeezed light.
-
-
-
Computation Of Magnetizations Of …Co][Co_(1-C)Gd_C]_L ][Co]_L [Co_(1-C)Gd_C ]_L [Co… Nanojunctions Between Co Leads, And Of Their Spin Wave Transport Properties
More LessUsing the effective field theory (EFT) and mean field theory (MFT), we investigate the magnetic properties of the nanojunction system ...Co][Co_(1-c)Gd_c]_l ][Co]_l [Co_(1-c)Gd_c ]_l [Co... between Co leads. The amorphous ][Co_(1-c)Gd_c]_l ] composite nanomaterial is modeled as a homogeneous random alloy of concentration c on an hcp crystal lattice, and l is the number of its corresponding hcp (0001) atomic planes. In particular, EFT determines the appropriate exchange constants for Co and Gd by computing their single-atom spin correlations and magnetizations, in good agreement with experimental data in the ordered phase. The EFT results for the Co magnetization in the leads serve to seed the MFT calculations for the nanojunction from the interfaces inward. This combined analysis yields the sublattice magnetizations for Co and Gd sites, and compensation effects, on the individual nanojunction atomic planes, as a function of the alloy concentration, temperature and nanojunction thicknesses. We observe that the magnetic variables are different for the first few atomic planes near the nanojunction interfaces, but tend to limiting solutions in the core planes. The EFT and MFT calculated exchange constants and sublattice magnetizations are necessary elements for the computation of the spin dynamics of this nanojunction system, using a quasi-classical treatment of the spin precession amplitudes at temperatures distant from the critical temperatures of the Co_(1-c)Gd_c alloy. The full analysis in the virtual crystal approximation (VCA) over the magnetic ground state of the system yields both the spin waves (SWs) localized on the nanojunction and the ballistic scattering and transport across the nanojunction for SWs incident from the leads, by applying the phase field matching theory (PFMT). The model results demonstrate the possibility of resonance-assisted maxima in the SW transmission spectra owing to interactions between the incident SWs and the localized spin resonances on the nanojunction. The spectral transmission results for low frequency spin waves are of specific interest for experimentalists, because these lower frequency SWs correspond to submicroscopic wavelengths which are of present interest in experimental magnonics research, and because the VCA is increasingly valid as a model approximation at such frequencies. Fig.1: Calculated spin variables sigma_Co and sigma_Gd, in the first layer of the [Co_(1-c)Gd_c]_2[Co]_2[Co_(1-c)Gd_c]_2 layer nanojunction, as a function of kT in meV. Fig.2: The total reflection R and transmission T cross sections associated with the scattering at the nanojunction for the cobalt leads SW modes 1, 2, with the selected choices of incident angle (phi_z, phi_y). Acknowledgements: The authors acknowledge QNRF financial support for the NPRP 4-184-1-035 project.
-
-
-
An investigation into the optimal usage of social media networks in international collaborative supply chains
Social Media Networks (SMNs) are collaborative tools used at an increasing rate in many business and industrial environments. They are often used in parallel with dedicated Collaborative Technologies (CTs), which are specifically designed to handle dedicated tasks. Within this research project, the specific area of supply chain management is the focus of investigation. Two case studies where CTs are already extensively employed have been conducted to evaluate the scope of SMN usage, to confirm the particular benefits provided, and to identify limitations. The overall purpose is to provide guidelines on joint CT/SMN deployment in developing supply chains. The application of SMNs to supply chain operations is fundamental to addressing the emerging need for increased peer-to-peer (P2P) communication. This type of communication takes place between individuals and is typified by increased relationship-type interaction. This is in contrast to traditional business-to-business (B2B) communication, which is typically conducted on a transactional basis, especially where it is confined by the characteristics of dedicated CTs. SMNs can be applied in supply chain networks to deal with unexpected circumstances or problem solving, to capture market demands and customer feedback, and in general to provide a medium for reacting to unplanned events. Crucially, they provide a platform where issues can be addressed on a P2P basis in the absence of confrontational, transactional interactions. The case studies reported in this paper concern EU-based companies, one being a major national aluminium profile extruder and the second a bottling plant for a global soft drinks manufacturer. In both cases the application of CTs to their supply chains is well established. However, whilst both companies could readily identify the strengths of their CT systems (information and data sharing, data storage and retrieval), they could also identify limitations. These limitations included the lack of real-time interaction at the P2P level and, interestingly, the lack of a common language between different CT systems in B2B communication. Overall, the case study companies commented that SMNs provided valuable adjuncts to existing CT systems, but that the SMNs were not integrated with the CT systems. There was a strongly perceived need for a better understanding of the contrasting and complementary capabilities of CTs and SMNs so that fully integrated systems could be implemented in future. Future work in this area will focus on the development of guidelines and procedures for achieving such complementarity in international collaborative supply chains.
-
-
-
Middleware architecture for cloud-based services using software defined networking (SDN)
By Raj Jain
In modern enterprise and Internet-based application environments, a separate middlebox infrastructure is deployed to provide application delivery services such as security (e.g., firewalls, intrusion detection), performance (e.g., SSL offloaders), and scaling (e.g., load balancers). However, there is no explicit support for middleboxes in the original Internet design, forcing datacenter administrators to accommodate middleboxes through ad-hoc and error-prone network configuration techniques. Given their importance, and yet the ad-hoc nature of their deployment, we plan to study application delivery in general in the context of cloud-based application deployment environments. To fully leverage these opportunities, application service providers (ASPs) need to deploy a globally distributed application-level routing infrastructure to intelligently route application traffic to the right instance. But since such an infrastructure would be extremely hard to own and manage, it is best to design a shared solution where application-level routing is provided as a service by a third-party provider with a globally distributed presence, such as an ISP. Although these requirements seem separate, they can be converged into a single abstraction for supporting application delivery in the cloud context.
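As a hedged illustration of what "application-level routing as a service" could look like operationally (this is not the authors' architecture), the following minimal Python sketch picks an application instance by trading off measured latency against instance load; all instance names, metrics, and weights are hypothetical.

```python
# Illustrative sketch only (not the authors' design): a minimal
# application-level routing service that a third-party provider could run
# to steer each request to a "best" application instance.

from dataclasses import dataclass

@dataclass
class Instance:
    name: str          # e.g. datacenter/region identifier (hypothetical)
    rtt_ms: float      # measured round-trip time from the client's region
    load: float        # current utilization in [0, 1]

def route(instances: list[Instance], alpha: float = 0.7) -> Instance:
    """Pick the instance minimizing a weighted mix of latency and load."""
    def score(inst: Instance) -> float:
        return alpha * inst.rtt_ms + (1.0 - alpha) * 100.0 * inst.load
    return min(instances, key=score)

if __name__ == "__main__":
    candidates = [
        Instance("doha-dc", rtt_ms=12.0, load=0.80),
        Instance("frankfurt-dc", rtt_ms=95.0, load=0.20),
        Instance("singapore-dc", rtt_ms=140.0, load=0.10),
    ]
    print("Route request to:", route(candidates).name)
```

In a shared, ISP-hosted version of this idea, the scoring rule and the per-instance metrics would be supplied by the ASP, while the globally distributed measurement and redirection machinery would be operated by the provider.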
-
-
-
A proposed transportation tracking system for mega construction projects using passive RFID technology
The city of Doha has witnessed a rapid change in its demographics over the past decade. The city has been thoroughly modernized, a massive change in its inhabitants' culture and behavior has occurred, and the need to redevelop its infrastructure has arisen, creating multiple mega construction projects such as the New Doha International Airport, the Doha Metro Network, and Lusail City. A mega-project such as the new airport in Doha requires 30,000 workers on average to be on site every day. This research tested the applicability of radio frequency identification (RFID) technology for tracking and monitoring construction workers during their trip from their housing to the construction site or between construction sites. The worker tracking system developed in this research utilized passive RFID tracking technology due to its efficient tracking capabilities, low cost, and easy maintenance. The system will be designed to monitor construction workers' ridership in a safe and non-intrusive way. It will use a combination of RFID, GPS (Global Positioning System), and GPRS (General Packet Radio Service) technologies. It will enable the workers to receive instant SMS alerts when the bus is within 10 minutes of the designated pick-up and drop-off points, reducing the time the workers spend on the street. This is very critical, especially in a country like Qatar where temperatures can reach 50 degrees Celsius during summer. The system will also notify management via SMS when workers board or alight from the bus or enter/leave the construction site. In addition, the system will display the real-time location of the bus and of the workers inside the bus at any point in time. Each construction worker is issued one or more unique RFID card(s) to carry. The card will be embedded in each worker's clothing. As a worker's tag is detected by the reader installed in the bus upon entering or leaving the bus, the time, date, and location are logged and transmitted to a secure database. The system will require no action on the part of drivers or workers, other than carrying the card, and will deliver the required performance without impeding the normal loading and unloading process. To explore the technical feasibility of the proposed system, a set of tests was performed in the lab. These experiments showed that the RFID tags were effective and stable enough to be used for successfully tracking and monitoring construction workers using the bus.
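The following Python sketch is a hypothetical illustration, not the implemented system: it shows how an on-bus unit might log an RFID tag read together with a GPS fix and trigger the 10-minute SMS alert described above; all coordinates, thresholds, and helper names are invented for the example.

```python
# Illustrative sketch only: combine an RFID tag read with a GPS fix and
# raise a "bus arriving" alert when the estimated time to the pick-up
# point drops below 10 minutes. All values are hypothetical.

import math
import time

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def log_tag_read(worker_tag: str, gps_fix: tuple[float, float]) -> dict:
    """Record a boarding/alighting event for later upload over GPRS."""
    return {"tag": worker_tag, "lat": gps_fix[0], "lon": gps_fix[1],
            "timestamp": time.time()}

def minutes_to_stop(gps_fix, stop, avg_speed_kmh=30.0) -> float:
    """Rough ETA to the pick-up/drop-off point at an assumed average speed."""
    return 60.0 * haversine_km(*gps_fix, *stop) / avg_speed_kmh

if __name__ == "__main__":
    bus_fix = (25.2854, 51.5310)       # hypothetical current bus position
    pickup_point = (25.3000, 51.5200)  # hypothetical camp pick-up point
    event = log_tag_read("RFID-0042", bus_fix)
    if minutes_to_stop(bus_fix, pickup_point) <= 10.0:
        print("SMS alert: bus arriving within 10 minutes;", event["tag"])
```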
-
-
-
A routing protocol for smart cities: RPL robustness under study
Smart cities can be defined as developed urban areas that create sustainable economic development and a high quality of life by excelling in multiple key areas such as transportation, environment, economy, living, and government. This excellence can be reached through efficiency based on intelligent management and integrated Information and Communication Technologies (ICT). Motivations. In the near future (2030), two thirds of the world population will reside in cities, drastically increasing the demands on city infrastructures. As a result, urbanization is becoming a crucial issue. The Internet of Things (IoT) vision foresees billions of devices forming a worldwide network of interconnected objects including computers, mobile phones, RFID tags, and wireless sensors. In this study, we focus on Wireless Sensor Networks (WSNs). WSNs are a technology particularly suitable for creating smart cities: a distributed network of intelligent sensor nodes can measure numerous parameters and communicate them wirelessly and in real time, making possible a more efficient management of the city. For example, the pollution concentration in each street can be monitored, water leaks can be detected, or noise maps of the city can be obtained. The number of WSN applications available for smart cities is bounded only by imagination: environmental care, sustainable development, healthcare, efficient traffic management, energy supply, water management, green buildings, etc. In short, WSNs could improve the quality of life in a city. Scope. However, such urban applications often use multi-hop wireless networks with high density to obtain sufficient area coverage. As a result, they need networking stacks and routing protocols that can scale with network size and density while remaining energy-efficient and lightweight. To this end, the IETF ROLL working group has designed the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL). This paper presents experimental results on the RPL protocol. The RPL properties in terms of delivery ratio, control packet overhead, dynamics, and robustness are studied. The results are obtained from several experiments conducted on two large WSN testbeds composed of more than 100 sensor nodes each. In this real-life scenario (high density and convergecast traffic), several intrinsic characteristics of RPL are underlined: path length stability, but a reduced delivery ratio and significant overhead (Fig. 1). However, the routing metrics, as defined by default, favor the creation of "hubs" aggregating most of the 2-hop nodes (Fig. 2). To investigate the robustness of RPL, we observe its behavior when facing the sudden death of several sensors and when several sensors are redeployed. RPL shows a good ability to maintain the routing process despite such events. However, the paper highlights that this ability can be reduced if only a few critical nodes fail. To the best of our knowledge, this is the first study of RPL on such a large platform.
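To make the "hub" effect concrete, the sketch below shows a heavily simplified version of greedy preferred-parent selection against an additive path metric, which is the mechanism that lets a few well-placed nodes attract most 2-hop neighbours; it is illustrative only and not the RPL implementation used in the testbeds, and all node names and costs are hypothetical.

```python
# Illustrative sketch only (simplified from the RPL idea): greedy
# preferred-parent selection against an additive link metric, plus the
# packet delivery ratio statistic reported in the experiments.

def select_parent(candidates: dict[str, tuple[float, float]]) -> str:
    """candidates maps neighbour id -> (advertised path cost, link cost).
    Return the neighbour minimizing total path cost to the root."""
    return min(candidates, key=lambda n: candidates[n][0] + candidates[n][1])

def delivery_ratio(sent: int, received: int) -> float:
    """Packet delivery ratio observed at the sink for one sensor."""
    return received / sent if sent else 0.0

if __name__ == "__main__":
    neighbours = {
        "node-17": (1.0, 1.2),   # close to the root, good link: likely "hub"
        "node-42": (2.0, 1.1),
        "node-63": (1.0, 3.5),   # close to the root but lossy link
    }
    print("preferred parent:", select_parent(neighbours))
    print("PDR:", delivery_ratio(sent=1000, received=934))
```

Because every node runs the same greedy rule, neighbours with a low advertised cost and a good link keep winning the comparison, which is how a handful of nodes end up aggregating most of the surrounding traffic.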
-
-
-
Optimization Model For Modern Retail Industry
In a growing market with demanding consumers, the retail industry needs decision support tools that reflect emerging practices and integrate key decisions cutting across several of its departments (e.g., marketing and operations). Current tools may not be adequate to tackle this kind of integration complexity in relation to pricing and to satisfy the retailing experience of customers. Of course, it has to be understood that the retailing experience can differ from one country to another and from one culture to another; therefore, off-the-shelf models may not be able to capture the behavior of customers in a particular region. This research aims at developing novel mixed-integer linear/nonlinear optimization formulations, with extensive analytical and computational studies based on the experience of a large retailer in Qatar. The model addresses product lines of substitutable items that serve the same customer need but differ in secondary attributes. The research done in this project demonstrates that there is added value in identifying the shopping characteristics of the consumer base and studying consumer behavior carefully in order to develop the appropriate retail analytics solution. The research is supported by a grant obtained through the NPRP program of Qatar Foundation.
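A stylized formulation, given here only to fix ideas and not taken from the paper, selects one discrete price level per item in the substitutable line; the coupling of demand to the full price vector is what pushes realistic versions from linear to nonlinear (or to a linearized MILP).

```latex
% Illustrative only: a stylized discrete price-level selection model for a
% line of substitutable items; the authors' actual formulation is not given
% in the abstract.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align}
  \max_{x} \quad & \sum_{i=1}^{n} \sum_{k=1}^{K} (p_{ik} - c_i)\, d_{ik}\, x_{ik} \\
  \text{s.t.} \quad & \sum_{k=1}^{K} x_{ik} = 1, \qquad i = 1,\dots,n, \\
  & x_{ik} \in \{0,1\},
\end{align}
where $x_{ik}=1$ selects price level $p_{ik}$ for item $i$, $c_i$ is its unit
cost, and $d_{ik}$ is the estimated demand at that level. Substitution across
the line couples each $d_{ik}$ to the whole price vector.
\end{document}
```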
-
-
-
My Method In Teaching Chemistry And Its Application Using Modern Technology
By Eman Shams
Since I studied chemical education in a very unique way at Oklahoma State University, Oklahoma, United States of America, teaching chemistry became a hobby rather than a job for me. This motivated me to apply educational technology to every aspect of chemistry that I teach. I applied my expertise through a smart digital flipped-classroom technique in the different chemistry laboratories, on the blackboard, and in the different chemistry tutorial classes that I taught, offered by the Chemistry Department, College of Arts and Sciences, Qatar University, by building an educational website with the theme "chemistry flipped" for teaching chemistry. The blended-learning chemistry website provides students with explanatory material that is augmented by audio-visual simulations within technology-immersed education. The general idea is that students work at their own pace, receiving lectures at home via online videos or podcasts which I record and post for them, so they come to class prepared. Students are then able to use class time to practice what they have learned with traditional schoolwork, but with my time freed up for additional one-on-one interaction. Students can review lessons anytime, anywhere on their personal computers and smartphones, reserving class time for in-depth discussions or for doing the actual experiments; most importantly, they know and understand what they are doing, unlocking knowledge and empowering minds. The video lecture is often seen as the key ingredient in the flipped approach. My talk will be about the use of educational technology in teaching chemistry: my chemistry demos go behind the magic, as I used new techniques to help students visualize the concepts, which helped them understand the topics better. I converted the written discussion into a conversation in the cloud. The website includes online interactive pre/post-classroom activity assessment pages, live podcast capture of the experiments done by the students, post-classroom activities, dynamic quiz and practice exam pages, honor pages recognizing hard-working students, in-class live lecture podcasting, linked experiments, and much more. Probabilistic system analysis is used to keep track of the students' progress, their access, and their learning, and I used statistics to relate the students' results before and after the use of my blended-learning, audio-visual-simulation flipped classroom. During the study the students showed an extraordinary passion for chemistry; they studied it on iTunes, YouTube, Facebook, and Twitter, they learned it very well, and chemistry was with them everywhere, even in their free time. The results demonstrate the advantages of using web-based learning with the flipped chemistry teaching method, which provides support for students in large lecture and laboratory classes. The two biggest changes resulting from the method are the manner in which content is delivered (outside of class) and the way students spend time in the classroom (putting principles into practice). Feedback from students has conveyed that the style is more dynamic and motivational than traditional, passive teaching, which helps keep open courseware going and growing.
-
-
-
High order spectral symplectic methods for solving PDEs on GPU
Efficient and accurate numerical solving of partial differential equations (PDEs) is essential for many problems in science and engineering. In this paper we discuss spectral symplectic methods of different numerical accuracy using the example of the Nonlinear Schrodinger Equation (NLSE), which can be taken as a model for various kinds of conservative systems. First-, second-, and fourth-order approximations have been examined and reviewed, considering the execution speed vs. accuracy trade-off. In order to exploit the capabilities of modern hardware, the numerical algorithms are implemented both on the CPU and on the GPU. Results are compared in terms of execution speed, single/double precision, data transfer, and hardware specifications.
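As an illustration of the class of methods discussed (not the authors' code), a second-order split-step Fourier scheme for the one-dimensional NLSE can be written in a few lines; the grid, time step, and soliton initial condition below are arbitrary choices, and on a GPU the same structure maps onto a GPU FFT library in place of numpy.fft.

```python
# Illustrative sketch only: a second-order (Strang) split-step Fourier
# scheme for the 1-D NLSE  i u_t + 0.5 u_xx + |u|^2 u = 0,
# the kind of spectral method that maps directly onto GPU FFT libraries.

import numpy as np

def split_step_nlse(u0, dx, dt, steps):
    """Propagate u0 for `steps` time steps of size dt on a periodic grid."""
    n = u0.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)      # spectral wavenumbers
    linear_phase = np.exp(-0.5j * k**2 * dt)        # full linear step
    u = u0.astype(complex)
    for _ in range(steps):
        u *= np.exp(0.5j * np.abs(u)**2 * dt)       # half nonlinear step
        u = np.fft.ifft(linear_phase * np.fft.fft(u))
        u *= np.exp(0.5j * np.abs(u)**2 * dt)       # half nonlinear step
    return u

if __name__ == "__main__":
    x = np.linspace(-20.0, 20.0, 1024, endpoint=False)
    u0 = 1.0 / np.cosh(x)                           # fundamental soliton
    u = split_step_nlse(u0, dx=x[1] - x[0], dt=1e-3, steps=2000)
    # The soliton's modulus should be preserved up to scheme accuracy.
    print("max |u| drift:", np.max(np.abs(np.abs(u) - np.abs(u0))))
```

Because each step is a composition of exactly solvable linear and nonlinear flows, the scheme preserves the conservative structure of the NLSE well over long integrations, which is the property the execution-speed vs. accuracy comparison in the paper is concerned with.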
-