Qatar Foundation Annual Research Forum Volume 2013 Issue 1
- Conference date: 24-25 Nov 2013
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume: 2013
- Published: 20 November 2013
401 - 420 of 541 results
Measurement of refractive indices of ternary mixtures using digital interferometry and multi-wavelength Abbemat refractometer
Authors: Mohammed Yahya
Abstract: Knowledge of the properties of liquid mixtures is important in scientific experimentation and technological development, and thermal diffusion in mixtures plays a crucial role in both nature and technology. The temperature and concentration coefficients of the refractive index, the so-called contrast factors, contribute to studies in various fields, including crude-oil experiments (SCCO) and the distribution of crude oil components. The Abbemat refractometer and the Mach-Zehnder interferometer have proven to be precise, highly accurate, and non-intrusive methods for measuring the refractive index of a transparent medium. Refractive indices of three ternary hydrocarbon mixtures and their pure components, tetrahydronaphthalene (THN), isobutylbenzene (IBB), and dodecane (C12), used mainly in gasoline, were measured experimentally using both a Mach-Zehnder interferometer and a multi-wavelength Abbemat refractometer. Temperature and concentration coefficients of the refractive index (contrast factors), as well as their individual correlations for calculating refractive indices, are presented in this research. The experimental measurements were correlated over a wide range of temperatures and wavelengths and a broad range of compositions. The measured refractive indices were compared with those estimated by several mixing rules: the Lorentz-Lorenz, Gladstone-Dale, Arago-Biot, Eykman, Wiener, Newton, and Oster predictive equations. The experimental values are in substantial agreement with the values obtained by the Lorentz-Lorenz, Gladstone-Dale, and Arago-Biot equations, but not with those obtained by the Oster and Newton equations. The temperature, concentration, and wavelength dependence of the refractive index of the mixtures agrees with published data. A comparison with the available literature and mixing rules shows that the new correlations can predict the experimental data with deviations of less than 0.001.
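As an illustration of how such mixing rules combine pure-component data, the sketch below evaluates three of the predictive equations named above for a volume-fraction-weighted mixture; the component indices and fractions are hypothetical placeholders, not measured values from this work.

```python
# Hedged sketch: predicting a mixture refractive index from pure-component
# indices n_i and volume fractions phi_i with three classical mixing rules.
# The numbers below are illustrative placeholders, not measured data.

def lorentz_lorenz(n, phi):
    # (n_mix^2 - 1)/(n_mix^2 + 2) = sum_i phi_i * (n_i^2 - 1)/(n_i^2 + 2)
    rhs = sum(p * (ni**2 - 1) / (ni**2 + 2) for ni, p in zip(n, phi))
    return ((1 + 2 * rhs) / (1 - rhs)) ** 0.5

def gladstone_dale(n, phi):
    # n_mix - 1 = sum_i phi_i * (n_i - 1)
    return 1 + sum(p * (ni - 1) for ni, p in zip(n, phi))

def arago_biot(n, phi):
    # n_mix = sum_i phi_i * n_i
    return sum(p * ni for ni, p in zip(n, phi))

# Hypothetical pure-component indices for THN, IBB, and C12 at one
# wavelength/temperature, with an equal-volume ternary composition.
n_pure = [1.54, 1.49, 1.42]
phi = [1/3, 1/3, 1/3]

for rule in (lorentz_lorenz, gladstone_dale, arago_biot):
    print(f"{rule.__name__}: n_mix = {rule(n_pure, phi):.4f}")
```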
Top ten hurdles on the race towards the Internet of Things
Authors: Aref Meddeb
While the Internet is continuously growing and unavoidably moving towards a ubiquitous internetworking technology, ranging from casual data exchange, home entertainment, security, military, healthcare, transportation, and business applications to the Internet of Things (IoT), where anything can communicate anywhere, anytime, there is an increasing and urgent need to define global standards for future Internet architectures as well as service definitions. In the future, the Internet will perhaps become the most exciting entertainment medium. It is very likely that we will be able to hold 3D conferences at home, using holographic projection, spatial sound, and haptic transmission. Further, we may no longer use traditional devices such as laptop and desktop computers and cell phones. Monitors and speakers will be completely different, ranging from 3D glasses and headsets to 3D contact lenses and in-ear wireless earphones. The keyboard and mouse will disappear as well, replaced by biometrics such as voice, iris, and fingerprint recognition, and movement detection. Many projects have been proposed to support and promote the Internet of Things worldwide, ranging from initiatives (http://www.iot-i.eu/public), alliances (http://www.ipso-alliance.org/), forums (http://www.iot-forum.eu/), and consortiums (http://iofthings.org/) to architectures (http://www.iot-a.eu/) and research clusters (http://www.internet-of-things-research.eu/), and the list goes on. There is seemingly a race towards leadership on the Internet of Things, to the extent that, often driven by "time-to-market" constraints and business requirements, most proposals made so far fail to consider all the facets required to deliver the expected services, and often lead to ambiguous and even contradictory definitions of what the Internet of Things is meant to be. For example, some proposals equate IoT with Web 3.0, while others claim it is primarily based on RFID and similar systems. Between web semantics and cognitive radio communications, there is a huge gap that needs to be filled by adequate communication, security, and application software and hardware. Unfortunately, so far there is no general consensus on the most suitable underlying technologies and applications that will help the Internet of Things become a reality. In this presentation, we describe ten of the most challenging hurdles that IoT actors need to clear in order to reach their goals: 1) how to safely migrate from IPv4 to IPv6 given the proliferation of IPv4 devices; 2) regulation, pricing, and neutrality of the future Internet; 3) the OAM tools required for accounting and monitoring future Internet services; 4) future quality-of-service definitions and mechanisms; 5) reliability of future Internet services; 6) future security challenges; 7) scalability of the Internet of Things with its billions of interworked devices; 8) future access (local loop) technologies; 9) future transport (long distance) telecommunication technologies; and 10) future end-user devices and applications. We aim to briefly explain and highlight the impact of each of these issues on service delivery in the context of the Internet of Things, and to describe some existing and promising solutions available so far.
The HAIL platform for big health data
Authors: Syed Sibte Raza Abidi, Ali Daniyal, Ashraf Abusharekh and Samina Abidi
Big data analytics in health is an emerging area, driven by the urgent need to derive actionable intelligence from large volumes of healthcare data in order to manage the healthcare system efficiently and improve health outcomes. In this paper we present a 'big' healthcare data analytics platform, termed Healthcare Analytics for Intelligence and Learning (HAIL), an end-to-end healthcare data analytics solution for deriving data-driven actionable intelligence and situational awareness to inform and transform health decision-making, systems management, and policy development. The innovative aspects of HAIL are: (a) the integration of data-driven and knowledge-driven analytics approaches; (b) a sandbox environment for healthcare analysts to develop and test health policy/process models by exploiting a range of data preparation, analytical, and visualization methods; (c) the incorporation of specialized healthcare data standards, terminologies, and concept maps to support data analytics; and (d) text analytics to analyze unstructured healthcare data. The architecture of HAIL comprises four main modules (fig 1): (A) the Health Data Integration module, which uses a semantics-based metadata manager to synthesize health data originating from a range of healthcare institutions into a rich, contextualized data resource; data integration is achieved through ETL workflows designed by health analysts and researchers. (B) the Health Analytics module, which provides a range of healthcare analytics capabilities: (i) exploratory analytics, using data mining to perform data clustering, classification, and association tasks; (ii) predictive analytics, to predict future trends/outcomes from past observations of healthcare processes; (iii) text analytics, to analyze unstructured texts (such as clinical notes, discharge summaries, referral notes, and clinical guidelines); (iv) simulation-based analytics, to explore what-if questions based on simulation models; (v) workflow analytics, to interact with modeled clinical workflows and understand the effects of various confounding factors; (vi) semantic analytics, to infer contextualized relationships, anomalies, and deviations through reasoning over a semantic health data model; and (vii) informational analytics, to present summaries, aggregations, charts, and reports. (C) the Data Visualization module, which offers a range of interactive data visualizations, such as geospatial visualizations, causal networks, 2D and 3D graphs, pattern clusters, and interactive visualizations for exploring high-dimensional data. (D) the Data Analytics Workbench, an interactive workspace that enables health data analysts to specify and set up their analytics process in terms of data preparation, the selection and set-up of analytical methods, and the selection of visualization methods; using the workbench, analysts can design sophisticated data analytics workflows/models from a range of data integration, analytical, and visualization methods. The HAIL platform is available via a web portal and a desktop application, and is deployed on a cloud infrastructure. HAIL can be connected to existing health data sources to provide front-end data analytics. Fig 2 shows the technical architecture. The HAIL platform has been applied to analyze real-life clinical healthcare situations using actual data from the provincial clinical data warehouse, and we will present two case studies of its use.
In conclusion, the HAIL platform addresses a critical need for healthcare analytics to impact health decision-making, systems management, and policy development.
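To make the workbench idea concrete, the sketch below shows how an analytics workflow of the kind described, data preparation followed by an analytical method and a summary step, might be composed as a pipeline; the class and step names are hypothetical illustrations, not the actual HAIL API.

```python
# Hedged sketch of a workbench-style analytics pipeline: data preparation,
# an analytical method, and a summary step composed into one workflow.
# Names and steps are hypothetical; the actual HAIL API is not shown here.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AnalyticsWorkflow:
    steps: List[Callable] = field(default_factory=list)

    def add_step(self, step: Callable) -> "AnalyticsWorkflow":
        self.steps.append(step)
        return self  # allow chaining, workbench-style

    def run(self, data):
        for step in self.steps:
            data = step(data)
        return data

# Illustrative steps: drop incomplete records, cluster, then summarize.
def prepare(records):
    return [r for r in records if r.get("age") is not None]

def cluster_by_age(records):
    return {"young": [r for r in records if r["age"] < 40],
            "older": [r for r in records if r["age"] >= 40]}

def summarize(groups):
    return {name: len(rs) for name, rs in groups.items()}

workflow = (AnalyticsWorkflow()
            .add_step(prepare)
            .add_step(cluster_by_age)
            .add_step(summarize))
print(workflow.run([{"age": 30}, {"age": 55}, {"age": None}]))
```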
Utility of a comprehensive Personal Health Record (PHR) strategy
Authors: Stephanie Rizk
A fundamental component of improving health care quality is including the patient as a participant in achieving their health goals and making decisions about their care. While many may see clinical information systems as a way to improve communication and track health information for clinicians, the role played by personal health records (PHRs) should not be overlooked in developing a modernized integrated health management system. This presentation will outline the positives and negatives of sharing clinical data between providers and their patients, describe a few PHR functionalities that can support care coordination and health outcomes, and describe the components of a national strategy to support the development of PHRs. A PHR, also referred to as a patient portal, is a system that allows an individual to keep track of their health information. While access to information is important for patients, doctors may feel uncomfortable with the idea of sharing information with their patients in this manner. Conversely, patients may worry about sharing information they would like to keep private. These concerns can often be addressed with the right policies to prevent unexpected use and sharing of the information, such as:
* providing data segmentation functionalities that allow patients to share only selected data with certain providers;
* sharing items such as medications and test results, but not other information that providers might like to keep private, such as notes.
Many PHRs draw information out of an existing clinical information system maintained by the physician. If a patient sees providers across multiple systems, the information can still be difficult to track; the most effective PHR allows data from any system to be included. To increase usefulness, a PHR may also store information that the patient adds themselves and provide tools such as medication reminders. Ideally, if a patient is seeing different doctors, those doctors will have information from everyone on the care team, but this does not always happen. If the patient has access to the information, they can share it with a range of providers, improving care coordination at the point of care, where it matters most. A PHR can be useful to all members of a population and should be considered when planning a clinical information system implementation strategy. However, because different hospitals may provide different systems, it is helpful to have a government-wide strategy. For example, in the United States the government has developed a lightweight standard technical infrastructure based on secure messaging, which is being used to populate PHRs. The government also supports the development of governance to outline standard policies that ensure all stakeholders can trust the information in the system. A strategy to coordinate the use of PHRs is envisioned to have potential in a number of areas, from improvements in individual health to leveraging these systems to improve research on quality and health behaviors.
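As a toy illustration of the data segmentation idea above, the sketch below filters a patient record down to only the categories the patient has agreed to share with a given provider; the record structure, category names, and provider names are hypothetical.

```python
# Hedged sketch of PHR data segmentation: a patient shares only selected
# categories of their record with a given provider. The record layout and
# category names are hypothetical illustrations.

record = {
    "medications": ["metformin"],
    "test_results": [{"test": "HbA1c", "value": 6.8}],
    "notes": ["provider-private note"],  # not shared by default
}

# Patient-controlled consent: which categories each provider may see.
consent = {
    "dr_smith": {"medications", "test_results"},
}

def shared_view(record: dict, provider: str, consent: dict) -> dict:
    """Return only the segments of the record this provider may access."""
    allowed = consent.get(provider, set())
    return {k: v for k, v in record.items() if k in allowed}

print(shared_view(record, "dr_smith", consent))   # medications + test results
print(shared_view(record, "unknown_dr", consent)) # empty: nothing consented
```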
Service-based Internet Infrastructure for the Qatar World Cup 2022
Authors: Muhammad Anan
The shortcomings of today's Internet and the high demand for complex and sophisticated applications and services drive a very interesting and novel research area called the Future Internet. Future Internet research focuses on developing a new network of similar magnitude to today's Internet but with more demanding and complex design goals and specifications. It strives to solve the issues identified in today's Internet by capitalizing on the advantages of emerging networking technologies such as Software Defined Networking (SDN), autonomic computing, and cloud computing. SDN represents an extraordinary opportunity to rethink computer networks, enabling the design and deployment of a future Internet. Utilizing these technologies leads to significant progress in the development of an enhanced, secure, reliable, and scalable Internet. In 2022, Qatar will host the World Cup, which more than 5 million people from all over the world are expected to attend. This event is expected to put the host country under massive pressure and huge challenges in terms of providing high-quality Internet service, especially given the increasing demand for emerging applications such as video streaming over mobile devices. It is vital to evaluate and deploy state-of-the-art technologies based on a promising future Internet infrastructure in order to provide high-quality Internet services suited to this event and to the continuing rapid growth of the country. The main goal of this paper is to present a new network design of similar magnitude to today's Internet but with more demanding and complex design goals and specifications, informed by the knowledge and experience gathered from four decades of using the current Internet. This research focuses on the development of a complete system, designed with the new requirements of the Future Internet in mind, that aims to provide, monitor, and enhance the increasingly popular video streaming service. The testing environment was built using the Global Environment for Network Innovations (GENI) testbed (see Figure 1). GENI, a National Science Foundation (NSF) funded research and development effort, aims to build a collaborative and exploratory network experimentation platform for the design, implementation, and evaluation of future networks. Because GENI is meant to enable experimentation with large-scale distributed systems, experiments will in general contain multiple communicating software entities; large experiments may contain tens of thousands of such entities, spread across continental or transcontinental distances. The conducted experiments illustrate how such a system can function under unstable and changing network conditions, dynamically learn its environment, recognize potential service degradation problems, and react to these challenges in an autonomic manner without the need for human intervention.
Advanced millimeter-wave concurrent 44/60GHz phased array for communications and sensing
Authors: Jaeyoung Lee, Cuong Huynh, Juseok Bae, Donghyun Lee and Cam Nguyen
Wireless communications and sensing have become an indispensable part of our daily lives, from communications, public service and safety, consumer, industry, sports, gaming and entertainment, asset and inventory management, and banking to government and military operations. As communications and sensing are poised to address challenging problems and make our lives even better under environments that can potentially disrupt them, such as highly populated urban areas, crowded surroundings, or moving platforms, considerable difficulties emerge that greatly complicate communications and sensing. Significantly improved communication and sensing technologies are therefore essential. Phased arrays allow RF beams carrying the communication or sensing information to be electronically and rapidly steered, or intercepted from different angles, across areas with particular amplitude profiles, enabling swift single- or multi-point communications or sensing over large areas or across many targets while avoiding potentially disruptive obstacles. They are particularly attractive for creating robust communication links in both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions due to their high directivity and scanning ability. In this talk, we will present a novel millimeter-wave dual-band 44/60 GHz phased-array front-end capable of two-dimensional scanning with orthogonal polarizations. This phased array particularly resolves the "RF signal leakage and isolation dilemma" encountered in existing phased-array systems. It integrates "electrically" the phased-array functions of two separate millimeter-wave bands into a single phased array operating concurrently in dual-band mode. These unique features, not achievable with existing millimeter-wave phased arrays, will push phased-array system performance to the next level while reducing size and cost, and will enhance the capability and applications of wireless communications and sensing, particularly as multifunction, multi-operation, multi-mission use over complex environments with miniature systems becomes essential. This phased array enables vast communication and sensing applications, for communication or sensing or both simultaneously - for instance, concurrent Earth-satellite/inter-satellite communications, high-data-rate WPANs and HDMI, and accurate, high-resolution, enhanced-coverage multi-target sensing.
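As background to the beam-steering idea, the sketch below computes the array factor of a uniform linear array with a progressive phase shift chosen to steer the main beam to a target angle; the element count and spacing are generic textbook parameters, not the parameters of the 44/60 GHz array described here.

```python
# Hedged sketch: array factor of an N-element uniform linear array steered
# to angle theta0 by a progressive phase shift. Generic textbook example;
# not the parameters of the dual-band 44/60 GHz array in this talk.
import numpy as np

N = 8          # number of elements (illustrative)
d = 0.5        # element spacing in wavelengths
theta0 = 30.0  # desired steering angle, degrees

theta = np.linspace(-90, 90, 721)
k = 2 * np.pi  # wavenumber, with d in units of wavelengths
n = np.arange(N)
# Element n gets phase -k*d*n*sin(theta0), so contributions add at theta0.
steer = np.exp(-1j * k * d * n * np.sin(np.radians(theta0)))
af = np.array([abs(np.sum(steer * np.exp(1j * k * d * n * np.sin(np.radians(t)))))
               for t in theta]) / N

print(f"peak at {theta[af.argmax()]:.1f} degrees")  # 30.0: beam steered
```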
Baseband DSP for dirty RF front-ends: Theory, algorithms, and test bed
Authors: Ozgur Ozdemir, Ridha Hamila and Naofal Al-Dhahir
Orthogonal Frequency Division Multiplexing (OFDM) is widely adopted as the transmission scheme of choice for almost all broadband wireless standards (including WLAN, WiMAX, LTE, DVB, etc.) due to its multipath resilience and practical implementation complexity. The direct-conversion radio frequency (RF) front-end architecture, where the down-conversion to baseband is accomplished in a single stage, requires fewer analog components than the super-heterodyne architecture, where the down-conversion is accomplished with one or more intermediate frequency (IF) stages. Fewer analog components result in reduced power consumption and cost. However, direct-conversion OFDM-based broadband wireless transceivers suffer from several performance-limiting RF/analog impairments, including I/Q imbalance and phase noise (PHN) [1]. I/Q imbalance refers to the amplitude and phase mismatches between the in-phase (I) and quadrature (Q) branches at the transmit and receive sides. In an OFDM system with I/Q imbalance, the transmitted signal at a particular subcarrier is corrupted by interference from the image subcarrier. To compensate for the effect of I/Q imbalance, the received signals from each subcarrier and its image subcarrier are processed jointly. PHN refers to the random unknown phase difference between the carrier signal and the local oscillator. In an OFDM transceiver, PHN rotates the signal constellation on each subcarrier and causes inter-carrier interference between the subcarriers, resulting in significant performance degradation. In this project, we designed novel algorithms for the efficient estimation and compensation of RF impairments, including I/Q imbalance and PHN, for beamforming OFDM systems such as 4G LTE cellular systems and 802.11 wireless local area networks (WLAN) [2]. Conventional OFDM transceivers ignore I/Q imbalance effects and process each OFDM subcarrier separately, which causes an irreducible error floor in the bit error rate performance. In our proposed method, by contrast, each OFDM subcarrier and its image subcarrier are processed jointly to mitigate the effects of I/Q imbalance. This method is capable of eliminating the error floor and achieving performance close to the ideal case where no I/Q imbalance exists. Furthermore, we have developed an experimental OFDM testbed to implement the proposed algorithms. Our testbed uses Universal Software Radio Peripheral (USRP) N210 RF front-ends and is based on packet-based OFDM similar to the IEEE 802.11a WLAN standard. The baseband processing is done in MATLAB, where the MATLAB driver for the USRP is used for stream processing of the transmitted and received signals. The measured experimental results demonstrate that the proposed algorithms improve performance significantly at low implementation complexity. [1] B. Razavi, RF Microelectronics. Englewood Cliffs, NJ: Prentice-Hall, 1998. [2] O. Ozdemir, R. Hamila, and N. Al-Dhahir, "I/Q imbalance in multiple beamforming OFDM transceivers: SINR analysis and digital baseband compensation," IEEE Trans. on Communications, vol. 61, no. 5, pp. 1914-1925, May 2013. Acknowledgement: This work is supported by the Qatar National Research Fund (QNRF), Grant NPRP 09-062-2-035.
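To illustrate the image-subcarrier coupling described above, the sketch below applies a frequency-independent receive I/Q imbalance to an OFDM symbol and shows that subcarrier k becomes a mixture of X_k and the conjugate of its image X_-k, which a joint 2x2 solve can undo. The imbalance values are illustrative, this is a simplified NumPy model (the testbed itself is MATLAB-based), and it is not the paper's estimation algorithm; in practice the gains g1 and g2 must be estimated, not assumed known.

```python
# Hedged sketch: receive I/Q imbalance in OFDM couples each subcarrier k
# with the conjugate of its image subcarrier -k. Simplified baseband model
# with illustrative imbalance values; not the paper's actual algorithm.
import numpy as np

Nc = 64                        # number of subcarriers
eps, phi = 0.1, np.radians(5)  # amplitude and phase imbalance (illustrative)
g1 = (1 + (1 + eps) * np.exp(-1j * phi)) / 2   # direct gain
g2 = (1 - (1 + eps) * np.exp(+1j * phi)) / 2   # image gain

rng = np.random.default_rng(0)
X = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], Nc)  # QPSK symbol per subcarrier

x = np.fft.ifft(X)              # time-domain OFDM symbol
y = g1 * x + g2 * np.conj(x)    # I/Q-imbalanced received signal
Y = np.fft.fft(y)

# Frequency domain: Y[k] = g1*X[k] + g2*conj(X[(-k) mod Nc])
X_img = np.conj(X[(-np.arange(Nc)) % Nc])
assert np.allclose(Y, g1 * X + g2 * X_img)

# Joint compensation per (k, -k) pair: solve a 2x2 linear system
# (assumes g1, g2 already estimated).
k = 3
A = np.array([[g1, g2], [np.conj(g2), np.conj(g1)]])
b = np.array([Y[k], np.conj(Y[(-k) % Nc])])
X_hat = np.linalg.solve(A, b)   # recovers [X[k], conj(X[-k])]
assert np.allclose(X_hat[0], X[k])
```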
MobiBots: Towards detecting distributed mobile botnets
Authors: Abderrahmen Mtibaa, Hussein Alnuweiri and Khaled A Harras
Completely automated robust segmentation of intravascular ultrasound images
Authors: Chi Hau Chen
It is widely known that the state of a patient's coronary heart disease can be better assessed using intravascular ultrasound (IVUS) than with more conventional angiography. Recent work has shown that segmentation and 3D reconstruction of IVUS pull-back sequence images can be used for computational fluid dynamic simulation of blood flow through the coronary arteries. The resulting map of shear stress in the blood vessel walls can be used to predict the susceptibility of a region of the arteries to future arteriosclerosis and disease. Manual segmentation of the images is time consuming and cost prohibitive for routine diagnostic use. Current segmentation algorithms do not achieve high enough accuracy because of speckle due to blood flow, the relatively low resolution of the images, and the presence of various artifacts, including guide-wires, stents, vessel branches, and other growths or inflammations. In addition, the images may be blurred by movement distortion, as well as by resolution-related mixing of closely resembling pixels, forming a type of out-of-focus blur. Robust automated segmentation achieving accuracy of 95% or above has remained elusive despite work by a large community of researchers in the machine vision field. We propose a comprehensive approach in which a multitude of algorithms is applied simultaneously to the segmentation problem. In an initial step, pattern recognition methods are used to detect and localize artifacts. We have achieved accuracy of 95% or better in detecting frames with stents and the location of the guide-wire in a large data set consisting of 15 pull-back sequences of about 1000 image frames each. Our algorithms for lumen segmentation using spatio-temporal texture detection and active contour models have achieved accuracies approaching 70% on the same data set, which is at the high end of accuracies reported in the literature. Further work is required to combine these methods to increase segmentation accuracy. One approach we are investigating is to combine algorithms using a meta-algorithmic approach: each segmentation algorithm computes, along with the segmentation, a measure of confidence in the segmentation, which can be biased by prior information about the presence of artifacts. A meta-algorithm then runs a library of algorithms on a sub-sequence of images to be segmented and chooses the segmentation based on the computed confidence measures. Machine learning and testing are performed on a large database. This research is in collaboration with Brigham and Women's Hospital in Boston, which has provided well over 45,000 frames of data for the study.
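A minimal sketch of the meta-algorithmic selection step described above: each candidate segmenter returns a result plus a self-reported confidence, optionally down-weighted when artifacts were detected in the frame, and the meta-algorithm keeps the highest-confidence result. The segmenter names and the penalty scheme are hypothetical.

```python
# Hedged sketch of the meta-algorithmic selection step: run a library of
# segmentation algorithms, bias each self-reported confidence by detected
# artifacts, and keep the most confident result. Names are hypothetical.
from typing import Callable, List, Tuple

Segmentation = list          # placeholder for a lumen contour
Segmenter = Callable[[object], Tuple[Segmentation, float]]

def meta_segment(frame, segmenters: List[Segmenter],
                 artifact_penalty: float = 1.0) -> Segmentation:
    best, best_conf = None, -1.0
    for seg in segmenters:
        contour, confidence = seg(frame)
        # Bias confidence with prior artifact information (e.g., a stent
        # or guide-wire detector flagged this frame).
        confidence *= artifact_penalty
        if confidence > best_conf:
            best, best_conf = contour, confidence
    return best

# Illustrative stand-ins for texture-based and active-contour segmenters.
def texture_segmenter(frame):
    return (["texture contour"], 0.62)

def active_contour_segmenter(frame):
    return (["snake contour"], 0.71)

result = meta_segment("frame", [texture_segmenter, active_contour_segmenter],
                      artifact_penalty=0.9)  # e.g., stent detected
print(result)  # -> ['snake contour']
```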
OSCAR: An incentive-based collaborative bandwidth aggregation system
Authors: Khaled Harras
The explosive demand for mobile data, predicted to increase 25- to 50-fold by 2015, along with expensive data roaming charges and users' expectation of remaining continuously connected, is creating novel challenges for service providers and researchers. A potential approach to solving these problems is exploiting all the communication interfaces available on modern mobile devices, in both solitary and collaborative forms. In the solitary form, the goal is to exploit any direct Internet connectivity on any of the available interfaces by distributing application data across them in order to achieve higher throughput, minimize energy consumption, and/or minimize cost. In the collaborative form, the goal is to enable and incentivize mobile devices to utilize their neighbors' under-utilized bandwidth in addition to their own direct Internet connections. Although today's mobile devices are equipped with multiple interfaces, there has been a high deployment barrier to adopting collaborative multi-interface solutions, and existing solutions focus on bandwidth maximization without paying sufficient attention to energy efficiency and effective incentive systems. We present OSCAR, a multi-objective, incentive-based, collaborative, and deployable bandwidth aggregation system that fulfills the following requirements: (1) it is easily deployable, requiring no changes to legacy servers, applications, or network infrastructure (i.e. no new hardware such as proxies and routers); (2) it seamlessly exploits available network interfaces in solitary and collaborative forms; (3) it adapts to real-time Internet characteristics and varying system parameters to utilize these interfaces efficiently; (4) it is equipped with an incentive system that encourages users to share their bandwidth with others; (5) it adopts an optimal multi-objective, multi-modal scheduler that maximizes overall system throughput while minimizing cost and energy consumption based on user requirements and system status; (6) it leverages incremental system adoption and deployment to further enhance performance gains. A typical scenario for OSCAR is shown in Figure 1. Our contributions are summarized as follows: (1) designing the OSCAR system architecture that fulfills the requirements above; (2) formulating OSCAR's data scheduler as an optimal multi-objective, multi-modal scheduler that takes user requirements, device context information, and application requirements into consideration while distributing application data across multiple local and neighboring device interfaces; (3) developing the OSCAR communication protocol, which implements our proposed credit-based incentive system and enables secure communication between the collaborating nodes and OSCAR-enabled servers. We evaluate OSCAR via implementation on Linux, as well as via simulation, and compare the results to the optimal achievable throughput, cost, and energy consumption rates. The OSCAR system, including its communication protocol, is implemented over the Click Modular Router framework to demonstrate its ease of deployment. Our results, verified via NS2 simulations, show that with no changes to current Internet architectures, OSCAR reaches the throughput upper bound. It also provides up to 150% enhancement in throughput compared to current operating systems, without changes to legacy servers.
Our results also demonstrate OSCAR's ability to maintain cost and energy consumption levels within user-defined acceptable ranges.
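To make the multi-objective scheduling idea concrete, the sketch below distributes units of application data across candidate interfaces by greedily scoring each interface with a weighted combination of throughput, monetary cost, and energy; the interface figures, weights, and congestion penalty are illustrative placeholders, not OSCAR's actual optimal scheduler formulation.

```python
# Hedged sketch of multi-objective interface scheduling: score each
# available interface (local or a neighbor's shared link) by weighted
# throughput, cost, and energy, then assign data greedily. Figures and
# weights are illustrative; OSCAR's real scheduler is an optimal
# multi-objective, multi-modal formulation.
from dataclasses import dataclass

@dataclass
class Interface:
    name: str
    throughput_mbps: float
    cost_per_mb: float      # monetary cost
    energy_per_mb: float    # energy cost (joules)

def score(iface: Interface, w_tput=1.0, w_cost=0.5, w_energy=0.3) -> float:
    # Higher throughput is good; cost and energy act as penalties.
    return (w_tput * iface.throughput_mbps
            - w_cost * iface.cost_per_mb
            - w_energy * iface.energy_per_mb)

def schedule(chunks_mb: int, ifaces: list) -> dict:
    """Assign each 1 MB chunk to the best-scoring interface, with a simple
    congestion penalty as an interface accumulates load."""
    load = {i.name: 0 for i in ifaces}
    for _ in range(chunks_mb):
        best = max(ifaces, key=lambda i: score(i) - 2.0 * load[i.name])
        load[best.name] += 1
    return load

ifaces = [Interface("wifi_local", 20.0, 0.0, 0.5),
          Interface("cellular", 8.0, 2.0, 1.0),
          Interface("neighbor_wifi", 12.0, 1.0, 0.8)]  # credit-based cost
print(schedule(10, ifaces))  # most chunks on wifi, overflow to neighbor
```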
Towards image-guided, minimally-invasive robotic surgery for partial nephrectomy
Introduction: Surgery remains one of the primary methods for treating cancerous tumours. Minimally-invasive robotic surgery, in particular, provides several benefits, such as filtering of hand tremor, more complex and flexible manipulation capabilities that lead to increased dexterity and higher precision, and more comfortable seating for the surgeon. All of these in turn lead to reduced blood loss, lower infection and complication rates, less post-operative pain, shorter hospital stays, and better overall surgical outcomes. Pre-operative 3D medical imaging modalities, mainly magnetic resonance imaging (MRI) and computed tomography (CT), are used for surgical planning, in which tumour excision margins are identified for maximal sparing of healthy tissue. However, transferring such plans from the pre-operative frame of reference to the dynamic intra-operative scene remains a necessary yet largely unsolved problem. We summarize our team's progress towards addressing this problem, focusing on robot-assisted partial nephrectomy (RAPN) performed with a da Vinci surgical robot. Method: We perform pre-operative 3D image segmentation of the tumour and surrounding healthy tissue using interactive random walker image segmentation, which provides an uncertainty-encoding segmentation used to construct a 3D model of the segmented patient anatomy. We reconstruct the 3D geometry of the surgical scene from the stereo endoscopic video, regularized by the patient-specific shape prior. We process the endoscopic images to detect tissue boundaries and other features. Then we align the pre-operative segmentation to the 3D reconstructed scene and the endoscopic image features, first via rigid and then via deformable registration. Finally, we present to the surgeon an augmented reality view showing an overlay of the tumour resection targets on top of the endoscopic view, in a way that depicts the uncertainty in localizing the tumour boundary. Material: We collected pre-operative and intra-operative patient data in the context of RAPN, including stereo endoscopic video at full HD 1080i (da Vinci S HD Surgical System), CT images (Siemens CT Sensation, 16 and 64 slices), MR images (Siemens MRI Avanto 1.5T), and US images (Ultrasonix SonixTablet with a flexible laparoscopic linear probe). We also acquired CT images and stereo video from phantoms and ex-vivo lamb kidneys with artificial tumours for testing and validation purposes. Results and Discussion: We successfully developed a novel proof-of-concept framework for a prior- and uncertainty-encoded augmented reality system that fuses pre-operative patient-specific information into the intra-operative surgical scene. Preliminary studies and initial surgeons' feedback on the developed augmented reality system are encouraging. Our future work will focus on investigating the use of intra-operative US data in our system to leverage all imaging modalities available during surgeries. Ahead of full system integration of these components, improving the accuracy and speed of the aforementioned algorithms, and the intuitiveness of the augmented reality visualization, remain active research projects for our team.
Summarizing machine translation text: An English-Arabic case study
Authors: Houda Bouamor, Behrang Mohit and Kemal Oflazer
Machine Translation (MT) has been championed as an effective technology for knowledge transfer from English to languages with less digital content. An example of such efforts is the automatic translation of English Wikipedia into languages with smaller collections, such as Arabic. However, MT quality is still far from ideal for many languages and text genres. When translating a document, many sentences are translated poorly, which can produce incorrect text and confuse the reader. Moreover, some of these sentences are not very informative and could be summarized away to make a more cohesive document. Thus, for tasks in which complete translation is not mandatory, MT can be effective if the system can provide an informative subset of the content with higher translation quality. In this scenario, text summarization can support MT effectively by keeping only the most important and informative parts of a given document for translation. In this work, we demonstrate a framework combining MT and text summarization that replaces the baseline translation with a summary that has higher translation quality than the full translation. For this, we combine a state-of-the-art English summarization system with a novel framework for predicting MT quality without references. Our framework is composed of the following major components: (a) a standard machine translation system, (b) a reference-free MT quality estimation system, (c) an MT-aware summarization system, and (d) an English-Arabic sentence matcher. More specifically, our English-Arabic system reads in an English document along with its baseline Arabic translation and outputs, as a summary, a subset of the Arabic sentences selected by their informativeness and their translation quality. We demonstrate the utility of our system by evaluating its translation and summarization quality, and show that we can balance improving MT quality against maintaining decent summarization quality. For summarization, we conduct both reference-based and reference-free evaluations and observe performance in the range of the state-of-the-art system. Moreover, the translation quality of the summaries shows an important improvement over the baseline translation of the entire documents. This MT-aware summarization approach can be applied to the translation of texts such as Wikipedia articles. For such domain-rich articles, there is large variation in translation quality across different sections, and an intelligent reduction of the translation task improves the final outcome. Finally, the framework is mostly language independent and can easily be customized for different target languages and domains.
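A minimal sketch of the selection step this framework implies: each sentence gets an informativeness score (from the summarizer) and a reference-free translation quality estimate, and the summary keeps the top-scoring sentences under a length budget. The scores and the interpolation weight are hypothetical stand-ins for the paper's actual components.

```python
# Hedged sketch of MT-aware summarization: keep the sentences that are both
# informative and predicted to be well translated, under a length budget.
# The scores and the weight alpha are hypothetical stand-ins for the
# summarizer and the reference-free MT quality estimator.

def select_summary(sentences, informativeness, mt_quality,
                   budget=3, alpha=0.5):
    """sentences: list of (en, ar) pairs; scores in [0, 1] per sentence."""
    scored = []
    for i, pair in enumerate(sentences):
        combined = alpha * informativeness[i] + (1 - alpha) * mt_quality[i]
        scored.append((combined, i, pair))
    scored.sort(reverse=True)
    keep = sorted(i for _, i, _ in scored[:budget])  # restore document order
    return [sentences[i][1] for i in keep]           # Arabic side as summary

sents = [("S1 en", "S1 ar"), ("S2 en", "S2 ar"),
         ("S3 en", "S3 ar"), ("S4 en", "S4 ar")]
info = [0.9, 0.4, 0.8, 0.6]   # summarizer's informativeness estimates
qual = [0.7, 0.9, 0.3, 0.8]   # reference-free MT quality estimates
print(select_summary(sents, info, qual, budget=2))  # -> ['S1 ar', 'S4 ar']
```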
Distributed algorithms in wireless sensor networks: An approach for applying binary consensus in large testbeds
Authors: Noor Al-Nakhala
Our work represents a new starting point for a wireless sensor network (WSN) implementation of a cooperative algorithm called binary consensus. Binary consensus allows a collection of distributed entities to reach consensus on the answer to a binary question, with the final decision based on the majority opinion; it can play a basic role in increasing the accuracy of detecting event occurrence. Existing work on binary consensus focuses on simulating the algorithm in a purely theoretical setting. We have adapted the binary consensus algorithm for use in wireless sensor networks by specifying how motes find partners with which to update state, and by adding a heuristic that lets individual motes determine convergence. In traditional binary consensus, individual nodes have no stop condition, meaning nodes continue to transmit even after convergence has occurred. In WSNs this is unacceptable, since it consumes power; to save power, sensor motes should stop communicating once the whole network has converged. For that reason, we designed a tunable heuristic value N that allows motes to estimate when convergence has occurred. We evaluated our algorithm successfully in hardware using 139 IRIS sensor motes and further supported our results using the TOSSIM simulator. We were able to minimize the convergence time, reaching near-optimal results. The hardware and simulation results demonstrate that convergence speed depends on the topology type, the number of motes in the network, and the distribution of the initial states; in particular, the denser the network, the lower the convergence time. During the experiments none of the motes failed, and our algorithm converged correctly.
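The sketch below gives a minimal, simulation-style rendering of binary consensus with a stop heuristic of the kind described: each mote repeatedly exchanges state with a random partner and stops transmitting after its state has been unchanged for N consecutive exchanges. The four-state update rule is one standard published formulation (binary interval consensus); the details here are illustrative rather than the exact protocol run on the IRIS motes.

```python
# Hedged sketch of binary consensus with a convergence heuristic N.
# Four-state interval-consensus rules (one standard formulation):
# opposing strong states annihilate into weak ones; a strong state
# converts an opposing weak state. Illustrative, not the mote firmware.
import random

STRONG0, WEAK0, WEAK1, STRONG1 = "0", "e0", "e1", "1"

def interact(a, b):
    if {a, b} == {STRONG0, STRONG1}:
        return (WEAK0, WEAK1) if a == STRONG0 else (WEAK1, WEAK0)
    if a == STRONG0 and b == WEAK1: return STRONG0, WEAK0
    if a == WEAK1 and b == STRONG0: return WEAK0, STRONG0
    if a == STRONG1 and b == WEAK0: return STRONG1, WEAK1
    if a == WEAK0 and b == STRONG1: return WEAK1, STRONG1
    return a, b  # all other pairings leave both states unchanged

def simulate(initial, N=20, seed=1):
    rng = random.Random(seed)
    states = list(initial)
    unchanged = [0] * len(states)
    while any(u < N for u in unchanged):   # some motes still transmitting
        i, j = rng.sample(range(len(states)), 2)
        new_i, new_j = interact(states[i], states[j])
        for k, new in ((i, new_i), (j, new_j)):
            unchanged[k] = unchanged[k] + 1 if new == states[k] else 0
            states[k] = new
    # A mote reports 1 if its final state leans towards 1.
    return [s in (STRONG1, WEAK1) for s in states]

# 7 motes vote 1, 5 vote 0 -> every mote should settle on the majority, 1.
votes = [STRONG1] * 7 + [STRONG0] * 5
print(all(simulate(votes)))  # expected: True (majority is 1)
```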
Silicon radio-frequency integrated-circuits for advanced wireless communications and sensing
Authors: Cam Nguyen
Silicon-based radio-frequency integrated circuit (RFIC) hardware is the backbone of advanced wireless communication and sensing systems, enabling low-cost, small-size, high-performance single-chip solutions. Advanced RF wireless systems, and in turn silicon RFICs, are relevant not only to commercial and military applications but also to national infrastructure. This importance is even more pronounced as the development of civilian technologies becomes increasingly important to national economic growth. New applications utilizing silicon RFIC technologies continue to emerge, spanning the spectrum from ultra-wideband to millimeter-wave and submillimeter-wave ultra-high-capacity wireless communications; from sensing for airport security to inventory for gas and oil; and from detection and inspection of buried underground oil and gas pipes to wireless power transmission and data communications for smart wells. In this talk, we will present some of our recent developments of silicon RFICs for advanced wireless communications and sensing.
PLATE: Problem-based learning authoring and transformation environment
Authors: Mohammed Samaka
The Problem-based Learning Authoring and Transformation Environment (PLATE) project seeks to improve student learning using innovative approaches to problem-based learning (PBL) in a cost-effective, flexible, interoperable, and reusable manner. Traditional subject-based learning that focuses on passively learning facts and reciting them out of context is no longer sufficient to prepare potential engineers, and all students, to be effective. Within the last two decades, the problem-based learning approach to education has started to make inroads into engineering and science education. The PBL educational approach is built around an authentic, ill-structured problem with multiple possible routes to multiple possible solutions. It offers unscripted opportunities for students to identify personal knowledge gaps as starting points for individual learning. Additionally, it requires a facilitator (not a traditional teacher) who guides learning by asking probing questions that model expert cognitive reasoning and problem-solving strategies. Bringing real-life context and technologies into the curriculum through a problem-based learning approach encourages students to become independent workers, critical thinkers, problem solvers, lifelong learners, and team workers. A systematic approach to supporting online PBL is the use of a pedagogy-generic e-learning platform such as IMS Learning Design (IMS-LD 2003), an e-learning technical standard for scripting a wide range of pedagogical strategies as formal models. The PLATE project builds on IMS-LD: it seeks to research and develop a process modeling approach, together with software tools, to support the development and delivery of face-to-face, online, and hybrid PBL courses or lessons in a cost-effective, flexible, interoperable, and reusable manner. The research team seeks to prove that the PLATE authoring system optimizes learning and that the PLATE system improves learning in PBL activities. For this poster presentation, the research team will demonstrate the progress it has made within the first year of research. This includes the development of a prototype PBL scripting language to represent a wide range of PBL models, the creation of transformation functions to map PBL models represented in the PBL scripting language into executable models represented in IMS-LD, and the architecture of the PLATE authoring tool. The research team plans to illustrate that the research and development of a PBL scripting language and the associated authoring and execution environment can provide a significant thrust toward further research on PBL by using meta-analysis, designing effective PBL models, and extending or improving a PBL scripting language. The team believes that PBL researchers can use the PBL scripting language and authoring tools to create, analyze, test, improve, and communicate various PBL models. The PLATE project can enable PBL practitioners to develop, understand, customize, and reuse PBL models at a high level by relieving them of the burden of handling the complex details of implementing a PBL course. The research team believes that the project will stimulate the application and use of PBL in curricula with online learning practice by incorporating PBL support into popular e-learning platforms and by providing a repository of PBL models and courses.
Advances in adaptive modulation for fading channels
Authors: Jehad Hamamra
Smartphones are becoming the dominant handsets available to wireless technology users, and wireless access to the Internet is becoming the default scenario for the vast majority of Internet users. The increasing demand for high-speed wireless Internet services pushes current technologies to their limits due to channel impairments. The conventional adaptive modulation technique (CAMT) is one of the powerful approaches currently used in advanced wireless communication systems such as Long Term Evolution (LTE). It is used to enhance the energy efficiency and increase the spectral efficiency of wireless communication systems over fading channels. CAMT dynamically changes the modulation scheme based on channel conditions to maximize throughput with minimum bit error rate (BER), using the channel state information of each user, which is fed back to the transmitter by the receiver via a reliable channel. CAMT is based on a predefined set of signal-to-noise ratio (SNR) ranges for different modulation orders: the higher the SNR range, the higher the modulation order, allowing higher transmission speeds that exploit good channel conditions. To keep the BER low when the channel condition degrades, the modulation order is reduced, which results in lower spectral efficiency but a more robust modulation scheme. This dynamic switching of the modulation order based on the SNR range of the radio channel is the key to how CAMT increases throughput while minimizing BER. However, given the high data rates required by new technologies, wireless video streaming, and applications such as downloading large files, CAMT alone is no longer sufficient. This work proposes an advance in adaptive modulation based on utilizing additional channel state information beyond the SNR ranges, namely how severe the fading experienced by the channel is. Severity is measured here by the amount of fading (AF), computed from the first and second moments of the envelope amplitude. This additional information distinguishes between channel conditions that have the same average SNR range but different levels of fading severity, which can be exploited to improve the performance of CAMT. Different levels of fading severity with similar SNR range sets were tested on Nakagami-m fading channels, for which the AF measure of fading severity equals 1/m. The investigation thus tests how to leverage the AF dimension on top of the conventional CAMT approach. We show that the BER of different modulation schemes depends on the amount of fading within every SNR range defined by the adaptive modulation sets. Current results show dramatic improvements in BER performance and throughput when AF is leveraged together with the SNR range sets defined in CAMT: utilizing AF with the SNR ranges allows adapting to higher modulation orders under channel conditions where this was not possible with the conventional technique.
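For reference, a standard definition of the amount of fading, written here under the common convention that uses moments of the squared envelope (instantaneous power); the abstract's phrasing in terms of envelope moments is assumed to refer to this quantity, which reduces to 1/m for a Nakagami-m channel.

```latex
% Amount of fading (AF) for a fading envelope r (standard definition):
\[
\mathrm{AF} \;=\; \frac{\operatorname{Var}(r^{2})}{\bigl(\mathbb{E}[r^{2}]\bigr)^{2}}
\;=\; \frac{\mathbb{E}[r^{4}] - \bigl(\mathbb{E}[r^{2}]\bigr)^{2}}{\bigl(\mathbb{E}[r^{2}]\bigr)^{2}},
\qquad
\mathrm{AF}_{\text{Nakagami-}m} \;=\; \frac{1}{m}.
\]
```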
Real-time multiple moving vehicle detection and tracking framework for autonomous UAV monitoring of urban traffic
Authors: Mohamed ElHelw
Unmanned aerial vehicles (UAVs) have the potential to provide comprehensive information for traffic monitoring, road conditions, and emergency response. To enable autonomous UAV operations, video captured by UAV cameras is processed with state-of-the-art algorithms for vehicle detection, recognition, and tracking. Processing aerial UAV images is challenging, however, because the images are usually captured with low-resolution cameras, from high altitudes, and while the UAV is in continuous motion. The latter makes it necessary to decouple camera and scene motion, and most techniques for moving vehicle detection perform ego-motion compensation to separate the two. To this end, successive image frames are first registered to match two or more images of the same scene taken at different times, followed by moving-vehicle labeling. Detected vehicles of interest are then routinely tracked by the UAV; however, vehicle tracking in UAV imagery is challenging due to constantly changing camera vantage points, changes in illumination, and occlusions. Most existing vehicle detection and tracking techniques suffer from reduced accuracy and/or entail intensive processing that prohibits deployment onboard UAVs unless substantial computational resources are available. This paper presents a novel multiple-moving-vehicle detection and tracking framework suitable for UAV traffic monitoring. The proposed framework executes in real time with improved accuracy and is based on image feature processing and projective geometry. FAST image features are first extracted, and outlier features are identified using least-median-of-squares estimation. Moving vehicles are then detected with a density-based spatial clustering algorithm. Vehicles are tracked using Kalman filtering, while an overlap-rate-based data association mechanism, followed by a tracking persistency check, discriminates between true moving vehicles and false detections. The proposed framework does not apply explicit image transformations (i.e. warping) to detect potential moving vehicles, which reduces computational time and decreases the probability of wrongly detected vehicles due to registration errors. Furthermore, the use of data association to correlate detected and tracked vehicles, together with selective update of each target's template based on the data association decision, significantly improves overall tracking accuracy. For quantitative evaluation, a testbed was implemented to evaluate the proposed framework on three datasets: the standard DARPA Eglin-1 and RedTeam datasets, and a home-collected dataset. The proposed framework achieves recall rates of 97.1% and 96.8% (average 96.9%) and precision rates of 99.1% and 95.8% (average 97.4%) on the Eglin-1 and RedTeam datasets, respectively, with an overall average of 97.1%. Evaluated on the home-collected dataset, it achieved 95.6% recall and 96.3% precision. Compared to other moving-vehicle detection and tracking techniques in the literature, the proposed framework achieves higher accuracy on average and is less computationally demanding. These quantitative results demonstrate the potential of the proposed framework for autonomous UAV traffic monitoring applications.
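The sketch below renders the detection stage described above in a simplified form: FAST features, a robust global (ego) motion fit with least-median-of-squares, and density-based clustering of the outliers into vehicle candidates. It is a single-frame-pair illustration using OpenCV and scikit-learn, with hypothetical thresholds, and omits the Kalman tracking and data association stages of the actual framework.

```python
# Hedged sketch of the detection stage: FAST features, global ego-motion
# fit with least-median-of-squares, outliers clustered into vehicle
# candidates with DBSCAN. Simplified single-pair pipeline; thresholds are
# illustrative, and tracking/data association are not shown.
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def detect_moving_vehicles(prev_gray, curr_gray):
    fast = cv2.FastFeatureDetector_create(threshold=25)
    kps = fast.detect(prev_gray, None)
    pts0 = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)

    # Track features into the current frame (sparse optical flow).
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                               pts0, None)
    ok = status.ravel() == 1
    p0, p1 = pts0[ok].reshape(-1, 2), pts1[ok].reshape(-1, 2)

    # Robust global motion: LMEDS rejects features on moving vehicles,
    # so inliers ~ background (camera ego motion), outliers ~ movers.
    _, inlier_mask = cv2.estimateAffine2D(p0, p1, method=cv2.LMEDS)
    movers = p1[inlier_mask.ravel() == 0]

    # Density-based clustering groups outlier features into vehicles.
    if len(movers) == 0:
        return []
    labels = DBSCAN(eps=15, min_samples=5).fit_predict(movers)
    return [movers[labels == c].mean(axis=0)   # one centroid per vehicle
            for c in set(labels) if c != -1]
```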
Mental task discrimination: Digital signal processing
Authors: Mohammed Yehia
Abstract: The objective of this research is to increase the accuracy of discriminating between different mental tasks through careful analysis of the brain's electrical signals. These signals leave the brain and can be picked up by sensors, then amplified and stored. Because the brain's electrical signals are weak and affected by the environment around the person, the recorded signal is also loaded with noise that alters its true value. A goal of this research is therefore to remove as much of the noise as possible from the recorded brain signals, to represent each signal precisely by factors and components unique to it, and then to train the system on these signals so that each is associated with a specific mental task. After the system is trained on the available training signals, the testing phase begins: the system is given a new brain signal, compares it with those it has stored, and classifies it into one of the stored mental tasks. This research discriminates between five mental tasks:
1. Baseline: the person is completely relaxed.
2. Multiplication: the person mentally computes a non-trivial multiplication.
3. Letter composing: the person imagines writing and forming a letter in their mind.
4. Rotation: the person imagines a three-dimensional model rotating around an axis.
5. Counting: the person imagines writing numbers in sequence.
A classification accuracy of 83.78% was achieved in distinguishing the five tasks: out of 100 electrical brain signals, roughly 84 are identified correctly, together with the mental task that produced them. There are high hopes of building on this research by increasing the number of mental tasks that can be distinguished, towards discriminating among all mental functions and knowing what a person is thinking and feeling. A further objective is to help people with special needs by recognizing what is going on in their minds and meeting their needs. It is hoped that everyone interested in this area can benefit from this research, reach new heights in analyzing and understanding the functions of the human mind, and offer full support to fellow humans with special needs.
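A minimal sketch of the train-then-classify pipeline this abstract describes, using spectral band-power features and a nearest-neighbor comparison with stored training signals; the feature choice and classifier are illustrative assumptions, since the abstract does not specify them.

```python
# Hedged sketch of the described pipeline: represent each EEG signal by a
# small set of spectral features, train on labeled mental tasks, then
# classify new signals by comparison with stored examples. The feature
# choice (band power) and classifier (k-NN) are illustrative assumptions.
import numpy as np

TASKS = ["baseline", "multiplication", "letter", "rotation", "counting"]

def band_power_features(signal, fs=250, bands=((4, 8), (8, 13), (13, 30))):
    """Mean spectral power in a few EEG bands (theta, alpha, beta)."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in bands])

def classify(signal, train_feats, train_labels, k=3):
    """k-nearest-neighbor vote over stored training features."""
    x = band_power_features(signal)
    dists = np.linalg.norm(train_feats - x, axis=1)
    nearest = np.array(train_labels)[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[counts.argmax()]  # majority label among the k nearest
```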
Macro-small handover in LTE under energy constraints
Authors: Nizar Zorba
Green communications have emerged as one of the most important trends in wireless communications because of their advantages of interference reduction, longer battery life, and lower electricity bills. Applying green principles to handover mechanisms is crucial for their integration into practical systems, as handover is one of the most resource-consuming operations in the system and has to be optimized under green communications objectives. On the other hand, a decrease in energy consumption should not mean lower performance for the operator and customers. This work therefore presents a hybrid handover mechanism that tackles two conflicting objectives, load balancing and energy consumption: the operator's objective is to balance the data load among its macro- and small cells, while the user equipment's objective is to decrease the energy consumed in order to guarantee longer battery life.
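A minimal sketch of such a hybrid decision rule, scoring each candidate cell by a weighted combination of the operator's load-balancing objective and the user's energy objective; the cell figures, the energy model, and the weight are illustrative assumptions, not the mechanism proposed in this talk.

```python
# Hedged sketch of a hybrid macro/small-cell handover decision: each
# candidate cell is scored by combining the operator's load-balancing
# objective with the user's energy objective. All figures and the weight
# lam are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Cell:
    name: str
    load: float          # current load fraction, 0..1 (operator metric)
    tx_power_mw: float   # UE transmit power needed to reach it (user metric)

def handover_score(cell: Cell, lam: float = 0.5) -> float:
    """Lower is better: lam weighs load balancing vs. UE energy."""
    energy = cell.tx_power_mw / 200.0          # normalize to roughly 0..1
    return lam * cell.load + (1 - lam) * energy

cells = [Cell("macro", load=0.8, tx_power_mw=180.0),
         Cell("small_1", load=0.3, tx_power_mw=60.0),
         Cell("small_2", load=0.6, tx_power_mw=40.0)]

target = min(cells, key=handover_score)
print(f"hand over to {target.name}")  # small_1 under these assumptions
```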
A concurrent tri-band low-noise amplifier for multiband communications and sensing
Authors: Cam Nguyen
Concurrent multiband receivers receive and process multiple frequency bands simultaneously, and are thus capable of providing the multitask or multifunction operation needed to meet consumer needs in modern wireless communications. Concurrent multiband receivers require at least some of their components to operate concurrently at different frequency bands, which results in substantial reductions in cost and power dissipation. Fig. 1 shows a simplified concurrent multiband receiver, typically consisting of an off-chip antenna and an on-chip low-noise amplifier (LNA) and mixer. While the mixer can be designed as a multiband or wideband component, the LNA should perform as a concurrent multiband device, and hence requires proper input matching to the antenna, a low noise figure (NF), high gain, and high linearity to handle multiple input signals simultaneously. The design of concurrent multiband LNAs is therefore the most critical issue in implementing fully integrated, low-cost, low-power concurrent multiband receivers. In this talk, we present a 13/24/35-GHz concurrent tri-band LNA implementing a novel tri-band load composed of two passive LC notch filters with feedback. The tri-band LNA, fabricated in a 0.18-μm SiGe BiCMOS process, achieves power gains of 22.3/24.6/22.2 dB at 13.5/24.5/34.5 GHz, respectively. It has a best noise figure of 3.7/3.3/4.3 dB and IIP3 of -17.5/-18.5/-15.6 dBm in the 13.5/24.5/34.5 GHz pass-bands, respectively. The tri-band LNA consumes 36 mW from a 1.8 V supply and occupies 920 μm × 500 μm.