Qatar Foundation Annual Research Conference Proceedings Volume 2014 Issue 1
- Conference date: 18-19 Nov 2014
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2014
- Published: 18 November 2014
Sensorless Direct Power Control Of Induction Motor Drive Using Artificial Neural Network
Authors: Abolfazl Halvaei Niasar, Hassan Moghbeli and Hossein Rahimi Khoei
This paper proposes the design of a sensorless induction motor drive based on the direct power control (DPC) technique. The principle of DPC is presented, and the possibilities of direct power control for induction motors (IMs) fed by a space-vector pulse-width-modulation (SV-PWM) inverter are studied. The simulation results show that the DPC technique retains the advantages of the previous method, such as fast dynamics and ease of implementation, without its drawbacks. Simulations of the closed-loop speed control system under various load conditions are carried out to verify the proposed methods. Results show that DPC of IMs works well with output power and flux control. Moreover, to reduce the cost of the drive and enhance reliability, an effective sensorless strategy based on an artificial neural network (ANN) is developed to estimate the rotor position and motor speed. The developed sensorless scheme is a new model reference adaptive system (MRAS) speed observer for direct power control of induction motor drives. The proposed MRAS speed observer uses the current model as the adaptive model. The neural network is designed and trained online using a back-propagation network (BPN) algorithm. The estimator is simulated in Matlab/Simulink. Simulation results confirm the performance of the ANN-based sensorless DPC induction motor drive under various conditions.
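To make the MRAS idea concrete, the following is a minimal sketch of a current-model rotor-flux estimator whose speed parameter is adapted online by a gradient (back-propagation-style) rule. The motor parameters, gains and sample time are placeholder assumptions, not values from the paper, and the adaptation law is a simplified stand-in for the authors' neural observer.

```python
import numpy as np

# Minimal sketch (assumed parameters): current-model flux estimator with a
# gradient-style speed adaptation, in the spirit of an MRAS speed observer.
Rr, Lr, Lm = 0.2, 0.05, 0.045          # assumed rotor resistance / inductances (SI units)
Ts, Tr = 1e-4, Lr / Rr                 # sample time and rotor time constant
eta = 50.0                             # adaptation gain (learning rate), assumed

def current_model_step(psi, i_s, w_est):
    """One Euler step of the current (adaptive) model in the stationary frame."""
    psi_a, psi_b = psi
    dpsi_a = (Lm * i_s[0] - psi_a) / Tr - w_est * psi_b
    dpsi_b = (Lm * i_s[1] - psi_b) / Tr + w_est * psi_a
    return np.array([psi_a + Ts * dpsi_a, psi_b + Ts * dpsi_b])

def speed_update(w_est, psi_ref, psi_adp):
    """Gradient-style speed update driven by the cross product of reference and adaptive fluxes."""
    err = psi_ref[0] * psi_adp[1] - psi_ref[1] * psi_adp[0]
    return w_est + eta * Ts * err
```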
Sensor Fault Detection And Isolation System
Authors: Cheng-ken Yang, Alireza Alemi and Reza Langari
This paper is aimed at providing an energy security strategy for petroleum production and processing, one of the grand challenges. Fault detection and diagnosis is the central component of abnormal event management (AEM) [1]. According to the International Federation of Automatic Control (IFAC), a fault is defined as an unpermitted deviation of at least one characteristic property or parameter of the system from the acceptable/usual/standard condition [2-4]. Generally, a fault diagnosis system has three parts: detection, isolation, and identification [5, 6, 7]. Depending on their capability, such systems are called FD (fault detection), FDI (fault detection and isolation) or FDIA (fault detection, isolation and analysis) systems [5]. As the need for energy grows rapidly, energy security becomes an important issue, especially in petroleum production and processing. Its importance can be considered from the following perspectives: higher system performance, product quality, human safety, and cost efficiency [5, 8]. With this in mind, the purpose of this research is to develop a fault detection and isolation (FDI) system capable of diagnosing multiple sensor faults in nonlinear cases. To bring this study closer to real-world applications in the oil industry, the parameters of the monitored system are assumed to be unknown. In the first step of the proposed method, phase space reconstruction techniques are used to reconstruct the phase space of the system. This step infers the system's behaviour from the collected sensor measurements. The second step uses the reconstructed phase space to predict future sensor measurements, and residual signals are generated by comparing the actual measurements to the predicted ones. Since, in practice, residual signals are not exactly zero in the fault-free situation, a Multiple Hypothesis Shiryayev Sequential Probability Test (MHSSPT) is introduced to further process these residual signals, and the diagnostic results are presented as probabilities. In addition, the proposed method is extended to the non-stationary case by using the conservation/dissipation property in phase space. The proposed method is evaluated on both simulated data and real process data. A three-tank model, based on the nonlinear laboratory setup DTS200, is used to generate the simulated data. Real process data collected from a sugar factory actuator system are also used to examine the proposed method. According to the results obtained from simulations and experiments, the proposed method is capable of indicating both healthy and faulty situations. Finally, we emphasize that the proposed approach is not limited to petroleum production and processing. For example, it can also be applied to enhance water quality and to detect discharges, such as leakage, in water resource management. Therefore, the proposed approach benefits not only energy security but also other grand challenges.
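To illustrate how residuals can be turned into fault probabilities, the sketch below applies a single-hypothesis Shiryaev-style sequential posterior update to a residual stream. The Gaussian residual models, the prior change rate and the simulated fault are assumptions for illustration only; the paper's multiple-hypothesis version generalizes this recursion to several fault hypotheses.

```python
import numpy as np
from scipy.stats import norm

rho = 0.01                  # assumed prior probability of fault onset per step
f_healthy = norm(0.0, 1.0)  # assumed residual density when the sensor is healthy
f_faulty = norm(3.0, 1.0)   # assumed residual density under the fault

def shiryaev_step(p, r):
    """Update the posterior fault probability p after observing residual r."""
    prior = p + rho * (1.0 - p)
    num = prior * f_faulty.pdf(r)
    den = num + (1.0 - prior) * f_healthy.pdf(r)
    return num / den

p = 0.0
residuals = np.r_[np.random.normal(0, 1, 200), np.random.normal(3, 1, 50)]
for r in residuals:
    p = shiryaev_step(p, r)   # p rises toward 1 after the simulated fault at step 200
```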
Accurate Characterization Of BiCMOS BJT Across DC-67 GHz With On-wafer Measurement And EM De-embedding
Authors: Juseok Bae, Scott Jordan and Nguyen Cam
The complementary metal oxide semiconductor (CMOS) and bipolar-complementary metal oxide semiconductor (BiCMOS) technologies offer low power dissipation, good noise performance, and high packing density in analog and digital circuit design. They have contributed significantly to the advancement of wireless communication and sensing systems and are now indispensable in these systems. As the technologies and device performance have advanced into the millimeter-wave regime over the last two decades, accurate S-parameters of CMOS and BiCMOS devices at millimeter-wave frequencies are in high demand for millimeter-wave radio-frequency integrated-circuit (RFIC) design. The accuracy of these S-parameters is essential for accurately extracting the device parameters and the small- and large-signal models. Conventional extraction techniques using an impedance standard substrate together with de-embedding have been replaced by on-wafer calibration techniques, in which the calibration standards are fabricated on the same wafer as the device under test (DUT), by virtue of their accurate characterization over wide frequency ranges and at high frequencies. However, challenges remain when the on-wafer calibration is conducted over a wide frequency range covering millimeter-wave frequencies with a DUT such as a bipolar junction transistor (BJT). The ends of the interconnects for the open and load standards are inherently very close to each other, since their separation depends on the spacing between the base (or collector) and the emitter of the BJT (about 0.25 μm). This not only causes significant gap and open-end fringing capacitances, which introduce substantial undesired effects into device characterization at millimeter-wave frequencies, but also makes it impossible to place resistors within such narrow gaps for the load standard design. In order to resolve this structural issue of the conventional on-wafer calibration standards, a new method combining on-wafer calibration and electromagnetic (EM)-based de-embedding has been developed. In the new technique, an appropriate spacing in the on-wafer calibration standards, which minimizes the parasitic capacitance between the close open-ends and leaves enough space to place the load standard's resistors, is determined from EM simulations, and the non-calibrated part within the spacing, consisting of interconnects and vias, is extracted by the EM-based de-embedding. The developed procedure with on-wafer calibration and EM-based de-embedding characterizes the S-parameters of BJTs in a 0.18-µm SiGe BiCMOS technology from DC to 67 GHz. The measured results show sizable differences in insertion loss and phase between the on-wafer characterizations with and without the EM-based de-embedding, demonstrating that the developed on-wafer characterization with EM-based de-embedding is needed for accurate characterization of devices at millimeter-wave frequencies, which is essential for the design of millimeter-wave wireless communication and sensing systems.
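For readers unfamiliar with de-embedding, the sketch below shows the generic cascade form of the operation: the fixture networks on either side of the DUT (here obtained from EM simulation in the paper's method) are removed by matrix inversion of their ABCD (chain) matrices. The transmission-line stand-ins for the fixtures and the DUT are purely illustrative assumptions.

```python
import numpy as np

def deembed(abcd_meas, abcd_left, abcd_right):
    """Remove left/right fixtures from a measured two-port: M = L @ D @ R  =>  D = L^-1 @ M @ R^-1."""
    return np.linalg.inv(abcd_left) @ abcd_meas @ np.linalg.inv(abcd_right)

def line_abcd(beta_l, z0=50.0):
    """ABCD matrix of an ideal lossless line section, used here as a stand-in fixture."""
    return np.array([[np.cos(beta_l), 1j * z0 * np.sin(beta_l)],
                     [1j * np.sin(beta_l) / z0, np.cos(beta_l)]])

abcd_fixture = line_abcd(0.1)                     # assumed interconnect/via fixture
abcd_dut_true = line_abcd(0.4, z0=30.0)           # assumed "device" for the example
abcd_measured = abcd_fixture @ abcd_dut_true @ abcd_fixture
abcd_dut = deembed(abcd_measured, abcd_fixture, abcd_fixture)   # recovers abcd_dut_true
```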
Visual Scale-adaptive Tracking For Smart Traffic Monitoring
Authors: Mohamed Elhelw, Sara Maher and Ahmed Salaheldin
This paper presents a novel real-time scale-adaptive visual tracking framework and its use in smart traffic monitoring, where the framework robustly detects and tracks vehicles from a stationary camera. Existing visual tracking methods often employ semi-supervised appearance modeling, where a set of samples is continuously extracted around the vehicle to train a discriminative classifier between the vehicle and the background. While these methods have proven their value, several issues remain to be addressed. One is the trade-off between high adaptability (prone to drift) and preserving the original vehicle appearance (susceptible to tracking loss under significant appearance variations). Another is vehicle scale changes due to the perspective camera effect, which increase the potential for an inaccurate update and subsequently visual drift. Still, scale adaptability has received little attention in vision-based discriminative trackers. In this paper we propose a three-step Scale Adaptive Object Tracking (SAOT) framework that adapts to scale and appearance changes. The framework is divided into three phases: (1) vehicle localization using a diverse ensemble, (2) scale estimation, and (3) data association, where detected and tracked vehicles are correlated. The first step computes the vehicle position using an ensemble built on compressed low-dimensional feature subsets projected from a high-dimensional feature space by random projections. This provides the diversity needed to accommodate individual classifier errors and different adaptability rates. The scale estimation step, applied after vehicle localization, is computed from matched points between a pre-stored template and the localized vehicle. This not only estimates the new scale of the vehicle but also serves as a correction step that prevents drifting by estimating the displacements between correspondences. The data association step is subsequently applied to link vehicles detected in the current frame with the tracked vehicles. Data association must consider factors such as the absence of a detected target, false detections and ambiguity. Figure 1 illustrates the framework in operation. While the vehicle detection phase is executed per frame, the continuous tracking procedure ensures that all vehicles in the scene, no matter how complex it is, are correctly accounted for. The performance of the proposed SAOT algorithm is further evaluated on a set of sequences with scale and appearance changes, blurring, camera motion and illumination changes. SAOT was compared to three established trackers from the literature: Compressive Tracking, Incremental Learning for Visual Tracking, and the Random Projection with Diverse Ensemble Tracker, using standard visual tracking evaluation datasets [4]. The initial target position for all sequences was initialized using manual ground truth. Centre Location Error (CLE) and recall are calculated to evaluate the methods. Table 1 presents the CLE errors and recall (in parentheses) measured on a set of two sequences with different challenges. It clearly demonstrates that SAOT performs better than the other trackers.
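The ensemble's diversity comes from projecting one high-dimensional appearance vector through several random projection matrices. The sketch below shows one common way to build such sparse random projections (Achlioptas-style entries); the dimensions, the number of ensemble members, and the classifier on top are assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_projection(d_high, d_low, s=3):
    """Sparse random projection matrix with entries in {-sqrt(s), 0, +sqrt(s)}."""
    probs = [1 / (2 * s), 1 - 1 / s, 1 / (2 * s)]
    return rng.choice([-np.sqrt(s), 0.0, np.sqrt(s)], size=(d_low, d_high), p=probs)

d_high, d_low, n_members = 10_000, 50, 8          # assumed sizes
projections = [sparse_projection(d_high, d_low) for _ in range(n_members)]

x = rng.normal(size=d_high)                        # a patch's high-dimensional feature vector
compressed = [P @ x for P in projections]          # diverse low-dimensional views, one per member
```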
"ElectroEncephaloGram (EEG) Mental Task Discrimination", Digital Signal Processing -- Master Of Science, Cairo University
More Less"ElectroEncephaloGram "EEG" Mental Task Discrimination", Master of Science dissertation, Cairo University, 2010. Recent advances in computer hardware and signal processing have made possible the use of EEG signals or "brain waves" for communication between humans and computers. Locked-in patients have now a way to communicate with the outside world, but even with the last modern techniques, such systems still suffer communication rates on the order of 2-3 tasks/minute. In addition, existing systems are not likely to be designed with flexibility in mind, leading to slow systems that are difficult to improve. This Thesis is classifying different mental tasks through the use of the electroencephalogram (EEG). EEG signals from several subjects through channels (electrodes) have been studied during the performance of five mental tasks: Baseline task for which the subjects were asked to relax as much as possible, Multiplication task for which the subjects were given nontrivial multiplication problem without vocalizing or making any other movements, Letter composing task for which the subject were instructed to mentally compose a letter without vocalizing (imagine writing a letter to a friend in their head),Rotation task for which the subjects were asked to visualize a particular three-dimensional block figure being rotated about its axis, and Counting task for which the subjects were asked to imagine a blackboard and to visualize numbers being written on the board sequentially. The work presented here maybe a part of a larger project, whose goal is to classify EEG signals belonging to a varied set of mental activities in a real time Brain Computer Interface, in order to investigate the feasibility of using different mental tasks as a wide communication channel between people and computers.
Designing A Cryptosystem Based On Invariants Of Supergroups
Authors: Martin Juras, Frantisek Marko and Alexander Zubkov
The work of our group falls within the area of cyber security, which is one of Qatar's Research Grand Challenges. We are working on designing a new public-key cryptosystem that can improve the security of communication networks. The most widely used cryptosystems at present (such as RSA) are based on the difficulty of factoring numbers constructed as the product of two large primes. The security of such systems has been put in doubt, since they can be attacked with the help of quantum computers. We are working on a new cryptosystem based on different (noncommutative) structures, namely algebraic groups and supergroups. Our system is based on the difficulty of computing invariants of actions of such groups. We have designed a system that uses invariants of (super)tori of general linear (super)groups. Effectively, we are building a "trapdoor function" that enables us to find a suitable invariant of high degree and encode the message quickly and efficiently, while presenting an attacker with the computationally very expensive and difficult task of finding an invariant of that high degree. As with every cryptosystem, the possibility of breaking it has to be scrutinized very carefully, and the system has to be investigated independently by other researchers. We have established theoretical results about the minimal degrees of invariants of a torus that inform the possible selection of parameters for our system. We continue to obtain more general theoretical results, and we are working towards an implementation and testing of this new cryptosystem. The second part of our work is an extension from the classical case of algebraic groups to the case of algebraic supergroups. We are concentrating especially on general linear supergroups. We have described the center of the distribution superalgebras of the general linear supergroups GL(m|n) using the concept of an integral in the sense of Haboush, and we have computed explicitly all generators of invariants of the adjoint action of the group GL(1|1) on its distribution algebra. The center of the distribution algebra is related via the Harish-Chandra map to infinitesimal characters. Understanding these characters and blocks would lead us to a description of the linkage principle, that is, of the composition factors of induced modules. Finding and proving the linkage principle for supergroups over fields of positive characteristic is one of our main interests. This extends classical results from representation theory that give mathematicians and physicists a tool to find a theoretical model in which the fundamental rules of the symmetries of the space-time continuum are realized. A better theoretical background could lead to a better understanding of experimental data and to predictions confirming or contradicting our current understanding of the universe. As has happened many times in the past, finding the right point of view and developing a new language can often lead to a different level of understanding. Therefore we value the theoretical part of our work as much as the practical work related to the cryptosystem.
Experimental Results On The Performance Of Visible Light Communication Systems
Authors: Mohamed Kashef, Mohamed Abdallah and Khalid Qaraqe
Energy-efficient wireless communication networks have become essential due to the associated environmental and financial benefits. Visible light communication (VLC) is a promising candidate for achieving energy-efficient communications. Light-emitting diodes (LEDs) have been introduced as energy-efficient light sources, and their light intensity can be modulated to transfer data wirelessly. As a result, VLC using LEDs can be considered an energy-efficient solution that exploits the illumination energy, which is already being consumed, for data transmission. We set up a fully operational VLC system testbed composed of both optical transmitters and receivers. We use this system to test the performance of VLC systems. In this work, we discuss the results obtained by running the experiment with different system parameters. We apply different signaling schemes at the LED transmitter, including optical orthogonal frequency division multiplexing (O-OFDM). We also validate our previously obtained analytical results on applying power control in VLC cooperative networks.
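To show what an O-OFDM transmit waveform involves, here is a minimal sketch of DC-biased optical OFDM, one common O-OFDM variant: Hermitian symmetry of the subcarriers makes the IFFT output real, and a DC bias keeps the LED drive signal non-negative. The IFFT size, QPSK mapping and bias level are assumptions, not the testbed's actual settings.

```python
import numpy as np

N = 64                                            # assumed IFFT size
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=(N // 2 - 1, 2))
qpsk = (1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])   # data on subcarriers 1..N/2-1

X = np.zeros(N, dtype=complex)
X[1:N // 2] = qpsk
X[N // 2 + 1:] = np.conj(qpsk[::-1])              # Hermitian symmetry -> real time-domain signal
x = np.fft.ifft(X).real

bias = 3 * x.std()                                # assumed DC bias level
drive = np.clip(x + bias, 0.0, None)              # non-negative LED intensity waveform
```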
Cerebral Blood Vessel Segmentation Using Gauss-Hermite Quadrature Filtering And Automatic Seed Selection
Background & Objective: Blood vessel segmentation has various applications, such as proper diagnosis, surgical planning, and simulation. However, the common challenges in blood vessel segmentation are mainly vessels of varying width and contrast changes. In this paper, we propose a segmentation algorithm in which: (1) a histogram-based approach is used to determine the initial patch (seeds), and (2) on this patch, a Gauss-Hermite quadrature filter is applied across different scales to handle vessels of varying width with high precision. Subsequently, a level set method is used to perform segmentation on the filter output. Methods: In the spatial domain, a Gauss-Hermite quadrature filter is essentially a complex filter pair, where the real component is a line filter that detects linear structures and the imaginary component is an edge filter that detects edge structures; the filter pair is used for combined line-edge detection. The local phase is the argument of the complex filter response and determines the type of structure (line/edge), while the magnitude of the response determines the strength of the structure. Since the filter is applied in different directions, all filter responses are combined to produce an orientation-invariant phase map by summing the responses over all directions. We use 6 filters with center frequency pi/2. To handle vessels of varying width, a multi-scale integration approach is implemented. Vessels of different width appear both as lines and as edges across different scales. These scales are combined to produce a global phase map that is used for segmentation. The resulting global phase map contains detailed information about line and edge structures. For blood vessel segmentation, a local phase of 90 degrees indicates edge structures. Therefore, it suffices to consider only the real part of the quadrature filter response. Edges are found at zero crossings, whereas positive and negative values are obtained inside and outside line structures, respectively. Therefore, a level set (LS) approach is utilized that uses the real part of the phase map as a speed function to drive the deforming contour towards the vessel boundary. In this way, the blood vessel boundary is extracted. An initial patch on the desired image object is required by this algorithm to start calculating the local phase map. It is obtained by first selecting a few candidate partitions using peaks (local maxima) in the intensity histogram. Then, the optimal number of seeds is obtained by iterative clustering of these peaks using their histogram heights and grey-scale differences. The seeds around the object form the patch. Results & Conclusion: The proposed method has been tested on 6 subjects of head MRT angiography with resolution 416×512×112. We use 6 filters of size 7×7×7 and 4 scales in this experiment. The average time required by MATLAB R14 to perform the segmentation is 3 minutes per subject on a machine with 2 GB RAM and a Core 2 Duo processor (without optimization). The resulting segmentation is promising and robust in terms of boundary leakage, as can be observed from the figure.
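The seed-selection step can be illustrated with a short sketch that finds peaks in the intensity histogram and keeps only peaks sufficiently separated in grey level, in the spirit of the iterative peak clustering described above. The peak prominence, the merging gap and the clustering rule are assumptions, not the paper's exact parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def seed_intensities(image, merge_gap=10, prominence=50):
    """Return candidate seed grey levels from histogram peaks (illustrative heuristic)."""
    hist, edges = np.histogram(image.ravel(), bins=256)
    peaks, _ = find_peaks(hist, prominence=prominence)      # local maxima of the histogram
    centers = edges[peaks]
    merged = []
    for c, h in sorted(zip(centers, hist[peaks]), key=lambda t: -t[1]):
        # keep a peak only if it is far enough (in grey level) from stronger kept peaks
        if all(abs(c - m) > merge_gap for m in merged):
            merged.append(c)
    return np.array(merged)    # grey levels around which the initial patch is grown

# Hypothetical usage: seeds = seed_intensities(volume_slice)  # volume_slice: a 2-D numpy array
```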
Self-learning Control Of Thermostatically Controlled Loads: Practical Implementation Of State Of The Art In Machine Learning.
Optimal control of thermostatically controlled loads, such as air conditioning and hot water storage, plays a pivotal role in the development of demand response. Demand response is an enabling technology in a society with increasing electrification and growing production from intrinsically stochastic renewable energy. Optimal control, however, often relies on the availability of a system model in combination with an optimizer, a popular approach being model predictive control. Building such a controller is considered a cumbersome endeavor requiring custom expert knowledge, which makes large-scale deployment of such solutions challenging. To this end, we propose to leverage recent developments in machine learning, enabling a practical implementation of a model-free controller. This model-free controller interacts with the system within safety and comfort constraints and learns from this interaction to make near-optimal decisions, all within a limited convergence time on the order of 20-40 days. When successful, self-learning control allows a large-scale, cost-effective deployment of demand response applications supporting a future with increased uncertainty in energy production. More precisely, recent results in the field of batch reinforcement learning and regression algorithms such as extremely randomized trees open the door to practical implementations. To support this, we present our most recent experimental results on implementing generic self-learning controllers for thermostatically controlled loads, showing that near-optimal policies can indeed be obtained within a limited time.
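Batch reinforcement learning with extremely randomized trees is commonly realized as fitted Q-iteration; the sketch below shows that scheme in its generic form. The batch of transition tuples, the discrete action set (e.g. heat pump on/off), the horizon and the tree parameters are placeholders, not the authors' actual controller.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(batch, actions, n_iter=40, gamma=0.95):
    """Generic fitted Q-iteration over a batch of (states, actions, rewards, next_states) arrays."""
    S, A, R, S_next = batch
    X = np.column_stack([S, A])
    q = None
    for _ in range(n_iter):
        if q is None:
            target = R                                  # first iteration: Q ~ immediate reward
        else:
            # max over candidate actions of the previous Q estimate at the next state
            q_next = np.column_stack(
                [q.predict(np.column_stack([S_next, np.full(len(S_next), a)])) for a in actions])
            target = R + gamma * q_next.max(axis=1)
        q = ExtraTreesRegressor(n_estimators=50).fit(X, target)
    return q                                            # greedy policy: argmax_a q([state, a])
```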
Self-powered Sensor Systems Based On Small Temperature Differences: Potential For Building Automation And Structural Health Monitoring
Authors: Jana Heuer, Hans-fridtjof Pernau, Martin Jägle, Jan D. König and Kilian Bartholomé
Sensors are the eyes and ears in the service of people - especially in inaccessible areas where regular maintenance or battery replacement is extremely difficult. By using thermoelectric generators, which are capable of directly converting heat flux into electrical energy, self-powered sensor systems can be established wherever temperature differences of a few kelvin exist. After installation, the sensors collect and transmit their data without any need for further maintenance such as battery replacement. Intelligent building automation, for instance, is key to significant energy reduction in buildings. Through precise control of sun blinds and of the set temperature for thermostats and air conditioning, wireless sensors help to increase a building's efficiency massively. Thermoelectric self-powered sensors have the additional potential to introduce more flexibility into building technology, as complex wiring is avoided. Buildings can more easily be adapted to altered utilization. Structural health monitoring is another field where energy-autarkic sensors could be of vital use. In-situ measurements of, e.g., temperature, humidity, strain and cracks in buildings are essential in order to determine the condition of construction materials. Such sensors are often hardly accessible, and wiring or battery replacement is costly or even impossible. Sensors that are driven by thermoelectric generators are maintenance-free and can help enhance the longevity of buildings as well as reduce maintenance costs. Furthermore, leakages in water transport systems can be detected by in-situ monitoring with self-powered sensors, thus reducing unnecessary water losses. The great progress in the development of low-power sensors, power management and radio communication, combined with the availability of high-efficiency thermoelectric generators, opens the possibility to run a self-powered sensor node with temperature gradients as low as 0.8 K. This potential will be presented with respect to selected fields of application.
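An order-of-magnitude check helps explain why a 0.8 K gradient can be enough. The sketch below evaluates the open-circuit voltage and matched-load power of a generic small thermoelectric module; the Seebeck coefficient, couple count and internal resistance are generic placeholder values, not figures from the presentation.

```python
# Assumed, generic module parameters (not from the abstract)
n_couples = 127          # thermocouples in a typical small TEG module
seebeck = 200e-6         # V/K per couple, Bi2Te3-class material
r_internal = 3.0         # ohm, module internal resistance
delta_t = 0.8            # K, the temperature difference quoted in the abstract

v_open = n_couples * seebeck * delta_t           # open-circuit voltage
p_max = v_open ** 2 / (4 * r_internal)           # power delivered into a matched load
print(f"{v_open * 1e3:.1f} mV open-circuit, {p_max * 1e6:.1f} uW into a matched load")
# A few tens of microwatts, which low-power duty-cycled sensor nodes can work with.
```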
Smart Consumers, Customers And Citizens: Engaging End-users In Smart Grid Projects
Authors: Pieter Valkering and Erik Laes
There is no smart grid without a smart end-user! Smart grids will be essential in future energy systems to allow for major improvements in energy efficiency and for the integration of solar energy and other renewables into the grid, thereby contributing to climate change mitigation and environmental sustainability at large. End-users will play a crucial role in these smart grids, which aim to link end-users and energy providers in a better balanced and more efficient electricity system. The success of smart grids depends on effective active load and demand-side management facilitated by appropriate technologies and financial incentives, requiring end-user, market and political acceptance. However, current smart grid pilot projects typically focus on technological learning and not so much on learning to understand consumer needs and behaviour in a connected living environment. The key question thus remains: how can end-users be engaged in smart grid projects so as to satisfy end-user needs and stimulate active end-user participation, thereby realizing as much as possible of the potential for energy demand reduction, energy demand flexibility, and local energy generation? The aim of the European S3C project (www.s3c-project.eu) is to further the understanding of how to engage end-users (households and SMEs) in smart grid projects and of the ways this may contribute to the formation of new 'smart' behaviours. This research is based upon three key pillars: 1) the analysis of a suite of (recently finished or well-advanced) European smart grid projects to assess success factors and pitfalls; 2) the translation of lessons learned into concrete engagement guidelines and tools; and 3) the further testing of the guidelines and tools in a collection of ongoing smart grid projects, leading to a finalized 'toolbox' for end-user engagement. Crucially, it differentiates findings for three key potential end-user roles: 'Consumer' (a rather passive role primarily involving energy saving), 'Customer' (a more active role offering demand flexibility and micro-scale energy production), and 'Citizen' (the most pro-active role involving community-based smart grid initiatives). Within this context, this paper aims to deliver a coherent view on current good practice in end-user engagement in smart grid projects. Starting from a brief theoretical introduction, it highlights the success factors - like underscoring the local character of a smart energy project - and barriers - like the lack of viable business cases - that the S3C case study analysis has revealed. It furthermore describes how these insights are translated into a collection of guidelines and tools on topics such as understanding target groups, designing adequate incentives, implementing energy monitoring systems, and setting up project communication. An outlook towards future testing of those guidelines and tools within ongoing smart grid projects is also given. Consequently, we argue, for each of the three typical end-user roles, which principles of end-user engagement should be considered good (or bad) practice. We conclude by highlighting promising approaches for end-user engagement that require further testing, as input for a research agenda on end-user engagement in smart grids.
Illustrations Generation Based On Arabic Ontology For Children With Intellectual Challenges
Authors: Abdelghani Karkar, Amal Dandashi and Jihad Al Ja'am
Digital devices and computer software have the prospect of helping children with intellectual challenges (IC) in learning, professional growth, and independent living. However, most tools and existing software applications that these children use are designed without consideration of their particular deficiencies. We have built an Arabic ontology-based learning system that automatically presents illustrations to characterize the content of stories for children with IC in the State of Qatar. We use several mechanisms to produce these illustrations, including Arabic natural language processing, an animal domain ontology, word-to-word relationship extraction, and automatic online search-engine querying. The main purpose of our proposed system is to improve the education, comprehension, perception, and reasoning of children with IC through the generated illustrations.
Application Of Design For Verification To Smart Sensory Systems
Authors: Mohammed Gh Al Zamil and Samer Samarah
Wireless Sensor Networks (WSNs) have enabled researchers and developers to propose a series of smart systems that serve the needs of societies and enhance the quality of services. WSNs consist of a set of sensors that sense environmental variables, such as temperature, humidity, and the speed of objects, and report them back to a central node. Although such an architecture seems simple, it suffers from many limitations that might affect its scalability, the modularity of coded programs, and its correctness in terms of synchronization problems such as nested monitor lockouts, missed or forgotten notifications, or slipped conditions. This research investigated the application of the design-for-verification approach in order to arrive at a design-for-verification framework that takes into account the specialized features of smart sensory systems. Such a contribution facilitates 1) verifying coded programs to detect temporal and concurrency problems, 2) automating the verification process of such complex and critical systems, and 3) modularizing the coding of these systems to enhance their scalability. Our proposed framework relies on separating the design of the system's interfaces from the coded body: separation of concerns. For instance, we are not looking to recompile the coded programs; instead, we are looking to discover design errors resulting from the concurrent temporal interactions among different programming objects. For this reason, our proposed framework adapts the concurrency controller design pattern to model the interaction modules. As a result, we were able to study the interactions among different actions and automatically recognize the transitions among them. This recognition guarantees building a finite-state automaton that formulates the input description for a model checker to verify given temporal properties. To evaluate our proposed methodology, we verified a real-time smart irrigation system that consists of a set of different sensors controlled by a single controller unit. The system has already been installed at our research field to control the irrigation process for the purpose of saving water. Further, we designed a set of temporal specifications to check whether the system conforms to them during the interactions among heterogeneous sensors. If not, the model checker returns a counterexample: a sequence of states that violates a given specification. Such a counterexample is invaluable for fixing the design error, which minimizes the risk of facing that error at run time. The results showed that applying the proposed framework facilitates producing scalable, modular, and error-free sensory systems. The framework allowed us to detect critical design errors and fix them before deploying the smart sensory system. Finally, we were also able to check the power consumption model of our installed sensors and the effect of data aggregation on saving more power during future operations.
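The essence of feeding a finite-state automaton to a model checker can be shown with a tiny explicit-state search: enumerate reachable states and return a counterexample trace when a safety property is violated. The states, transitions and property below ("the valve is never open while the tank reads empty") are invented for illustration; they are not the irrigation system's actual model or specifications.

```python
from collections import deque

# Hypothetical transition system: (controller mode, valve, tank level)
transitions = {
    ("idle", "closed", "full"):   [("watering", "open", "full")],
    ("watering", "open", "full"): [("watering", "open", "empty"), ("idle", "closed", "full")],
    ("watering", "open", "empty"): [("idle", "closed", "empty")],
    ("idle", "closed", "empty"):  [("idle", "closed", "full")],
}

def violates(state):
    controller, valve, tank = state
    return valve == "open" and tank == "empty"    # the safety property being checked

def check(initial):
    """Breadth-first search over reachable states; returns a counterexample path or None."""
    queue, parent = deque([initial]), {initial: None}
    while queue:
        s = queue.popleft()
        if violates(s):
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]                     # counterexample trace from the initial state
        for t in transitions.get(s, []):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None

print(check(("idle", "closed", "full")))          # prints a trace reaching the violating state
```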
NEGATIVE FOUR CORNER MAGIC SQUARES OF ORDER SIX WITH A BETWEEN 1 AND 5
In this paper we introduce and study special types of magic squares of order six. We list some enumerations of these squares and present a parallelizable code based on the principles of genetic algorithms. A magic square is a square matrix in which the sum of all entries in each row, each column and both main diagonals yields the same number, called the magic constant. A natural magic square of order n is an n×n matrix whose entries consist of all the integers from 1 to n². We define a new class of magic squares and present some of the counts obtained from an enumeration carried out over two years.
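The defining property is easy to state in code. The sketch below computes the magic constant n(n²+1)/2 (111 for a natural square of order six) and checks the row, column and diagonal sums; it illustrates the definition only, not the genetic-algorithm enumeration code mentioned in the abstract.

```python
import numpy as np

def magic_constant(n):
    """Magic constant of a natural magic square of order n."""
    return n * (n * n + 1) // 2

def is_magic(square):
    """True if all row, column and main-diagonal sums are equal."""
    m = np.asarray(square)
    sums = list(m.sum(axis=0)) + list(m.sum(axis=1)) + [np.trace(m), np.trace(np.fliplr(m))]
    return len(set(int(s) for s in sums)) == 1

print(magic_constant(6))   # 111 for a natural magic square of order six
```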
Arabic Natural Language Processing: Framework For Translative Technology For Children With Hearing Impairments
Authors: Amal Dandashi, Abdelghani Karkar and Jihad Aljaam
Children with hearing impairments (HI) often face many educational, communicational and societal challenges. Arabic natural language processing can be used to develop several key technologies that may alleviate the cognitive and language-learning difficulties children with HI face in the Arab world. In this study, we propose a system design that provides the following component functionalities: (1) multimedia translation elements that can be dynamically generated from Arabic text; (2) 3D-avatar-based text-to-video translation (from Arabic text to Qatari Sign Language) involving manual and non-manual signals; (3) an emergency phone-based system that translates Arabic text to Qatari Sign Language video and vice versa; and (4) a multi-component system designed to be mobile and usable on various platforms. The system involves the use of Arabic natural language processing, Arabic word and video ontologies, and customized engine querying. The objective of the presented system framework is to provide translational and cognitive assistive technology to individuals with HI and to empower their autonomous capacities.
Optimized Search Of Corresponding Patches In Multi-scale Stereo Matching: Application To Robotic Surgery
Authors: Amna Alzeyara, Jean-marc Peyrat, Julien Abinahed and Abdulla Al-ansari
INTRODUCTION: Minimally-invasive robotic surgery benefits the surgeon with increased dexterity and precision, more comfortable seating, and depth perception. Indeed, the stereo-endoscopic camera of the daVinci robot provides the surgeon with a high-resolution 3D view of the surgical scene inside the patient's body. To leverage this depth information in advanced computational tools (such as augmented reality or collision detection), we need a fast and accurate stereo matching algorithm that computes the disparity (pixel shift) map between the left and right images. To improve the trade-off between speed and accuracy, we propose an efficient multi-scale approach that overcomes standard multi-scale limitations due to interpolation artifacts when upsampling intermediate disparity results from coarser to finer scales. METHODS: Standard stereo matching algorithms perform an exhaustive search for the most similar patch between the reference and target images (along the same horizontal line when the images are rectified). This requires a wide search range in the target image to ensure finding the pixel corresponding to the one in the reference image (Figure 1). To optimize this search, we propose a multi-scale approach that uses the disparity map resulting from the previous iteration at lower resolution. Instead of directly using the pixel position in the reference image to place the search region in the target image, we shift it by the corresponding disparity value from the previous iteration and reduce the width of the search region, as it is expected to be closer to the optimal solution. We also add two additional search regions shifted by the disparity values at the left and right adjacent pixel positions (Figure 2) to avoid errors typically related to interpolation artifacts when resizing the disparity map. To avoid large overlaps between different search regions, we only add them where the disparity map has strong gradients. MATERIAL: We used stereo images from the Middlebury dataset (http://vision.middlebury.edu/stereo/data/) and stereo-endoscopic images captured at full HD 1080i resolution using a daVinci S/Si HD Surgical System. Experiments were performed with a GPU implementation on a workstation with 128GB RAM, an Intel Xeon Processor E5-2690, and an NVIDIA Tesla C2075. RESULTS: We compared the accuracy and speed of the standard and proposed methods using ten images from the Middlebury dataset, which has the advantage of providing ground-truth disparity maps. We used the sum of squared differences (SSD) as the similarity metric between patches of size 3x3 in the left and right rectified images, resized to half their original size (665x555). For the standard method, we set the search range offset and width to -25 and 64 pixels, respectively. For the proposed method, we initialize the disparity to 0, followed by five iterations with a search range width of 16. Results in Table 1 show that we managed to improve the average accuracy by 27% without affecting the average computation time of 120 ms. CONCLUSION: We proposed an efficient multi-scale stereo matching algorithm that significantly improves accuracy without compromising speed. In future work, we will investigate the benefits of a similar approach using temporal consistency between successive frames and its use in more advanced computational tools for image-guided surgery.
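The core matching step can be sketched as SSD patch comparison along a rectified scan line, with the search window centred on a disparity prior (such as the upsampled result of the previous, coarser iteration). The patch size, search range and function signature are illustrative assumptions, not the paper's GPU implementation.

```python
import numpy as np

def ssd_disparity(left, right, y, x, prior=0, half_range=8, half_patch=1):
    """Return the disparity at (y, x) minimising SSD within prior +/- half_range (rectified images)."""
    ref = left[y - half_patch:y + half_patch + 1, x - half_patch:x + half_patch + 1]
    best_d, best_cost = prior, np.inf
    for d in range(prior - half_range, prior + half_range + 1):
        xr = x - d                                     # corresponding column in the right image
        if xr - half_patch < 0 or xr + half_patch >= right.shape[1]:
            continue
        cand = right[y - half_patch:y + half_patch + 1, xr - half_patch:xr + half_patch + 1]
        cost = np.sum((ref.astype(float) - cand.astype(float)) ** 2)
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```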
On The Use Of Pre-equalization To Enhance The Passive UHF RFID Communication Under Multipath Channel Fading
Authors: Taoufik Ben Jabeur and Abdullah Kadri
Background: We consider a monostatic passive UHF RFID system composed of one RFID reader with a single antenna, used for both transmission and reception, and RF tags. The energy of the continuous-wave signal transmitted by the RFID reader is used to power up the internal circuitry of the RF tags and to backscatter their information to the reader. In passive UHF RFID there is no source of energy other than the continuous wave. Experiments show that multipath channel fading can dramatically reduce the received power at the tag, so the received energy may be insufficient to power it up. To remedy this problem, we propose a pre-equalizer applied to the transmitted reader signal in order to maintain a received power sufficient to power up the tag. Objectives: This work aims to design a pre-equalizer specific to passive UHF RFID systems that is able to combat the effect of multipath channel fading and thus maintain a high received power at the tag. Methods: a. In the first stage, we assume knowledge of the multipath channel fading and of the continuous wave. The pre-equalizer is then designed for a fixed Rayleigh multipath channel in order to maximize the energy of the received signal at the tag. b. Properties are extracted from the pre-equalizer designed for the fixed channel. c. A more general equalizer uses these properties so that it can be applied to any unknown multipath Rayleigh channel. Simulation results: Simulation results show that the proposed pre-equalizers combat the effect of the multipath channel fading and thus increase the received power at the RF tag. The energy consumption of the tag remains the same, since all operations are performed at the RFID reader side.
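A generic way to state the first design stage is: for a known channel h, choose a finite-length pre-filter g with unit transmit energy that maximizes the energy of the received signal h convolved with g. The sketch below solves that problem via the principal right singular vector of the channel convolution matrix; the channel taps and filter length are random placeholders, and this is a textbook formulation rather than the authors' specific design.

```python
import numpy as np

rng = np.random.default_rng(2)
h = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(2)   # assumed Rayleigh-fading taps
Lg = 8                                                            # assumed pre-filter length

# Convolution matrix H such that (h * g) = H @ g
H = np.zeros((len(h) + Lg - 1, Lg), dtype=complex)
for k in range(Lg):
    H[k:k + len(h), k] = h

_, _, vh = np.linalg.svd(H)
g = np.conj(vh[0])                     # unit-norm pre-filter maximizing ||H g|| (received energy)

naive = np.zeros(Lg)
naive[0] = 1.0                         # no pre-equalization, same transmit energy
print(np.linalg.norm(H @ g) ** 2, np.linalg.norm(H @ naive) ** 2)   # energy: optimized vs. plain
```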
Determination Of Magnetizing Characteristic Of A Single-phase Self Excited Induction Generator
Authors: Mohd Faisal Khan, Mohd Rizwan Khan, Atif Iqbal and Moidu Thavot
The magnetizing characteristic of a self-excited induction generator (SEIG) defines the relationship between its magnetizing reactance and the air-gap voltage. This characteristic is essential for steady-state, dynamic or transient analysis of SEIGs, as the magnetizing inductance is the main factor responsible for voltage build-up and its stabilization in these machines. In order to obtain the data needed for this characteristic, the induction machine is subjected to a synchronous speed test. The data yielded by this test can be used to extract the complete magnetizing behaviour of the test machine. In this paper a detailed study is carried out on a single-phase induction machine to obtain its magnetizing characteristic. The procedure for performing the synchronous speed test and recording the necessary data is explained in detail, along with the relevant expressions for the calculation of the different parameters. The magnetizing characteristic of the investigated machine is reported in the paper.
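A simplified version of the underlying calculation: at synchronous speed the slip is zero, so (neglecting core loss) the input impedance reduces to R1 + j(X1 + Xm), and each voltage/current reading gives one point of the magnetizing-reactance versus air-gap-voltage curve. The stator parameters and readings below are placeholder values, not the paper's measured data.

```python
import numpy as np

R1, X1 = 2.5, 3.0                       # assumed stator resistance and leakage reactance (ohm)

def magnetizing_point(v_rms, i_rms):
    """One (air-gap voltage, magnetizing reactance) point from a synchronous-speed-test reading."""
    z_in = v_rms / i_rms
    xm = np.sqrt(z_in ** 2 - R1 ** 2) - X1   # since |Z_in|^2 = R1^2 + (X1 + Xm)^2 at zero slip
    v_airgap = i_rms * xm                    # voltage across the magnetizing branch
    return v_airgap, xm

# A sweep of applied voltages (V) and measured currents (A) traces the characteristic:
for v, i in [(80.0, 1.1), (160.0, 2.4), (230.0, 4.2)]:
    print(magnetizing_point(v, i))           # Xm falls as the machine saturates
```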
Control Of Packed U-cell Multilevel Five-phase Voltage Source Inverter
Authors: Atif Iqbal, Mohd Tariq, Khaliqur Rahman and Abdulhadi Al-qahtani
A seven-level five-phase voltage source inverter with a packed U-cell topology is presented in this paper. It is called a packed U-cell inverter because each unit of the inverter is U-shaped. Fig. 1 presents the power circuit configuration of a five-phase seven-level inverter using packed U-cells. Depending upon the number of capacitors in the investigated topology, different numbers of voltage levels can be achieved. In the presented topology, two capacitors are used to obtain seven levels (Vdc, 2Vdc/3, Vdc/3, 0, -Vdc/3, -2Vdc/3, -Vdc). The voltage across the second capacitor (C) must be maintained at one-third of the DC-link voltage.
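The seven levels follow from simple arithmetic on the two sources: with the DC link at Vdc and the auxiliary capacitor regulated at Vdc/3, the per-phase output is a signed combination of the two. The enumeration below is an illustrative simplification of the switching states, not the topology's exact switch-to-level mapping.

```python
from fractions import Fraction

Vdc = Fraction(1)
Vc = Vdc / 3                       # second capacitor held at one-third of the DC link

levels = sorted({a * Vdc + b * Vc for a in (-1, 0, 1) for b in (-1, 0, 1)
                 if abs(a * Vdc + b * Vc) <= Vdc})
print([float(v) for v in levels])  # seven levels: -1, -2/3, -1/3, 0, 1/3, 2/3, 1 (times Vdc)
```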
An Ultra-low-power Processor Architecture For High-performance Computing And Other Compute-intensive Applications
Authors: Toshikazu Ebisuzaki and Junichiro Makino
The GRAPE-X processor is an experimental processor chip designed to achieve extremely high performance per watt. It was fabricated in TSMC's 28 nm technology and has achieved 30 Gflops/W. This figure is three times higher than the performance of the best GPGPU cards announced so far using the same 28 nm technology. Power consumption has been the main factor limiting the performance improvement of HPC systems. This is because of the breakdown of the so-called CMOS scaling law. Until the early 2000s, when the design rule of silicon devices was larger than 130 nm, shrinking the transistor size by a factor of two resulted in four times more transistors, a two times higher clock frequency, half the supply voltage, and the same power consumption. Thus, one could achieve an 8x performance improvement. However, with transistors smaller than 130 nm design rules, it has become difficult to reduce the supply voltage, resulting in only a factor-of-two performance improvement for the same power consumption. As a result, reducing the power consumption of the processor when it is fully in operation has become the most important issue. In addition, it has also become important to achieve high parallel efficiency on relatively small problems. With large parallel machines, high peak performance is realized, but that peak performance is in many cases not very useful, since it is achieved only for unrealistically large problems. For problems of practical interest, the efficiencies of large-scale parallel machines are sometimes surprisingly low. In order to achieve high performance per watt and high parallel efficiency on small problems, we developed a highly streamlined processor architecture. To reduce the communication overhead and improve parallel efficiency, we adopted an SIMD architecture. To reduce the power consumption, we adopted a distributed-memory-on-chip architecture, in which each SIMD processor core has its own main memory. Based on the GRAPE-X architecture, an exaflops (10^18 flops) system with a power consumption of less than 50 MW will be possible in the 2018-2019 time frame. For many real applications, including those in the cyber security area, which require 10 TB or less of memory, a parallel system based on our GRAPE-X architecture will provide the highest parallel efficiency and the shortest time to solution at the same time.
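A quick back-of-the-envelope check of the figures quoted above (an illustrative calculation, not part of the abstract): 30 Gflops/W implies roughly 33 MW of compute power for an exaflops system, consistent with the stated target of less than 50 MW once system overheads are included, and the classic pre-130 nm scaling factors reproduce the cited 8x gain per scaling step.

```python
gflops_per_watt = 30e9            # measured chip efficiency, flops per watt
exaflops = 1e18                   # target system performance, flops
print(exaflops / gflops_per_watt / 1e6, "MW")          # ~33 MW of compute power

transistors_gain, clock_gain = 4, 2                     # per classic CMOS scaling step
print(transistors_gain * clock_gain, "x performance per scaling step")   # the 8x figure
```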