Qatar Foundation Annual Research Conference Proceedings Volume 2018 Issue 3
- Conference date: 19-20 Mar 2018
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2018
- Published: 15 March 2018
Variable Message Sign strategies for Congestion Warning on Motorways
Authors: Wael Khaleel Mohammad Alhajyaseen, Nora Reinolsmann, Kris Brijs and Tom Brijs

1. Introduction

Motorways are the safest roads by design and regulation. Still, motorways in the European Union accounted for nearly 27,500 fatalities from 2004 to 2013 (Adminaite, Allsop, & Jost, 2015). The likelihood of rear-end collisions increases with higher traffic densities, which is alarming considering that the proportion of traffic on motorways has increased over the past decade (Adminaite et al., 2015). The onset of traffic congestion is characterized by changing flow conditions, which can pose a serious safety hazard to drivers (Marchesini & Weijermars, 2010). In particular, hard congestion tails force drivers to change from motorway speed to stopped conditions, which can result in severe rear-end crashes (Totzke, Naujoks, Mühlbacher, & Krüger, 2012). Fatalities and injuries due to motorway crashes represent a threat to public health and should be reduced as much as possible.

2. Congestion warning and VMS

The effects of congestion on safety generally depend on the extent to which drivers are surprised by the congestion. The type of congestion, the location of the queue, and the use of variable message signs to warn drivers in advance can influence whether drivers are able to decelerate safely (Marchesini & Weijermars, 2010). Variable message signs (VMS) are considered one of the primary components of Intelligent Transportation Systems (ITS) and provide motorists with route-specific information or warnings. The advantage of VMS is that they can display traffic state messages dynamically and in real time. Accordingly, VMS can reduce uncertainty and prepare drivers to anticipate and safely adapt to a traffic event (Arbaiza & Lucas-Alba, 2012). The Easyway II project produced one of the most important guidelines for VMS harmonisation in Europe, developed to update and improve current VMS signing practices. Despite this effort towards harmonisation, a broad variety of sign designs, message types and field placements are still applied to warn drivers about congestion tails. Moreover, empirical research testing the available guidelines provides inconsistent findings. Hence, further scientific research is needed to shed more light on how different VMS types, message designs, and placements influence safe driving performance.

3. Objectives

Available guidelines suggest that advance warning messages should be placed 1 km, 2 km, and 4 km prior to a traffic event if the purpose is to allow drivers to anticipate safely (i.e., tactical use of VMS), and no further than 10 km prior to a traffic event when the purpose is to influence route choice rather than driver behavior (i.e., strategic use of VMS) (Evans, 2011; Federal Highway Administration, 2000). Gantry overhead signals and cantilever side poles are the most common VMS types. The Easyway guidelines contain different formats for congestion warning messages, namely messages containing a) pictograms of congestion with or without a redundant text unit, b) a maximum of four information units, and c) with or without distance information (Arbaiza & Lucas-Alba, 2012). The objective of this study was to analyze the effect of different congestion warning VMS formats on visual and driving behavior on motorways leading to a hard congestion tail. To that purpose, we used a driving simulator to observe accidents, speed and deceleration, and an eye tracker to monitor gaze fixations.
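A back-of-envelope check shows why these placement distances matter. The sketch below applies the basic braking relation d = v^2 / (2a); the speeds and the comfortable deceleration rate are illustrative assumptions, not values from the study.

```python
# Stopping distance from motorway speeds (illustrative values only).

def stopping_distance(speed_kmh: float, decel_ms2: float) -> float:
    """Distance in metres needed to brake from speed_kmh to a stop."""
    v = speed_kmh / 3.6              # km/h -> m/s
    return v ** 2 / (2 * decel_ms2)

for speed in (100, 120, 140):
    d = stopping_distance(speed, decel_ms2=3.0)  # ~3 m/s^2, a comfortable rate
    print(f"{speed} km/h -> {d:.0f} m to stop")
```

Even at 140 km/h a driver needs roughly 250 m to stop comfortably, so a warning 1 km ahead leaves a wide anticipation margin, while one placed several kilometres ahead risks being forgotten before the critical zone.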
4. Method

Data from thirty-six drivers (male and female) with an average age of 43 years were collected. We implemented a within-subject design, with all participants exposed to seven VMS scenarios in randomized order. The apparatus was the driving simulator of the Transportation Research Institute (IMOB, UHasselt), a medium-fidelity, fixed-base simulator (STISIM M400; Systems Technology Incorporated) logging a wide range of driving parameters. The mock-up consists of a Ford Mondeo with a steering wheel, direction indicators, brake pedal, accelerator, clutch, and manual transmission. The virtual environment is visualized through three projectors on a 180° screen including three rear-view mirrors. Furthermore, we used the FaceLAB 5.0 eye tracking system to record eye movements. The eye tracker was installed on the dashboard of the driving cab and accommodated head rotations of ±45° and gaze rotations of ±22° around the horizontal axis.

5. Results

We found that drivers with higher initial speeds stop closer to the congestion tail and are more likely to have a rear-end crash. A gantry-mounted congestion warning with a pictogram and the word "congestion", presented at a distance of 1 km, resulted in the lowest mean speeds and smoothest deceleration for all drivers. A congestion warning at a distance of more than 3 km had no effect on driver behavior in the critical zone before the congestion tail. Eye fixations on gantry-mounted VMS were more frequent but shorter in duration compared to cantilevers. Finally, the visual load imposed on drivers increased with more information units on the VMS.

6. Conclusion

The distance between the congestion warning and the actual congestion tail is a crucial aspect of the effectiveness of this kind of VMS. VMS congestion warnings located too far away lose their effect in the critical approaching zone, and VMS congestion warnings located too close might compromise safe deceleration. A gantry-mounted congestion warning displaying the word 'congestion' together with a pictogram, located 1 km before the congestion tail, was clearly noticed from all lanes without imposing too much visual load, and had the best impact on speed, resulting in smooth deceleration and safe stopping distances. In contrast, a congestion warning located more than 3 km from the actual congestion tail had no safety effect, as drivers started to speed up again before reaching the critical approaching zone.

7. Acknowledgment

This publication was made possible by the NPRP award [NPRP 9-360-2-150] from the Qatar National Research Fund (a member of The Qatar Foundation). The statements made herein are solely the responsibility of the author[s].

8. References

Adminaite, D., Allsop, R., & Jost, G. (2015). Ranking EU progress on improving motorway safety. ETSC PIN Flash Report 28. https://doi.org/10.1016/j.trf.2014.06.016
Arbaiza, A., & Lucas-Alba, A. (2012). Variable Message Signs harmonisation: Principles of VMS messages design, supporting guideline, 1-60.
Evans, D. (2011). Highways Agency policy for the use of Variable Signs and Signals (VSS).
Federal Highway Administration. (2000). Chapter 2E: Guide signs, freeways and expressways, 1-82.
Marchesini, P., & Weijermars, W. (2010). The relationship between road safety and congestion on motorways (R-2010-12). SWOV Institute for Road Safety Research.
Totzke, I., Naujoks, F., Mühlbacher, D., & Krüger, H. P. (2012). Precision of congestion warnings: Do drivers really need warnings with precise information about the position of the congestion tail? Human Factors of Systems and Technology, 235-247.
Qatar Meteorology Department Security Enhancement Recommendations
The Internet has become part of almost every organizational culture, and security threats increase with increasing Internet use. Security practice has therefore become important for almost every organization, including the Qatar Meteorology Department (QMD) in the State of Qatar. The aim of this research is to evaluate the current security level of the QMD by examining the security practices present in the organization and the level of security awareness among its employees, and then to provide the organization with security policy and awareness program recommendations to enhance its security practice. The importance of this research lies in its contribution to enhancing the organization's security level. To achieve the research objectives, a mixture of methodologies was used to collect the fundamental data: survey questionnaires, interviews, and field observation. For the data collection process to succeed, a number of strategies were used in each method to draw the most benefit from it, and together these methods sufficed to collect the essential primary data. A body of literature was also reviewed to understand the research subject further. Based on the collected data, several analysis methods were used to draw conclusions about the organizational security level; the findings illustrate the need for security policies and awareness programs. Accordingly, a number of security policy and awareness program recommendations have been established. The findings and recommendations can support the organization in enhancing its security level as much as possible, since no system is completely secure. Furthermore, this research presents valuable information about the organization's current security level.
A Reverse Multiple-Choice Based mLearning System
Authors: AbdelGhani Karkar, Indu Anand and Lamia Djoudi

Mobile learning can help accelerate students' learning strengths and comprehension skills. Due to the immediacy and effectiveness of mobile learning, many mobile educational systems with diverse assessment techniques have been proposed. However, we observe a common limitation in existing assessment techniques: the learner cannot correlate question and answer choices or freely adapt answers in a given multiple-choice question, often resulting in an incorrect assessment grade. In the current work, we present a reverse multiple-choice mobile learning system based on knowledge acquisition. Using a knowledge base, a set of answer choices is created for a multiple-choice question. For each of one or more of the incorrect answers, a follow-up query is generated for which the incorrect answer is correct. The goal is to find, via a query, an optimal association between the incorrect answers and the correct answer. User studies of the proposed system demonstrated its efficiency and effectiveness.

Keywords: Mobile Learning, Knowledge Acquisition, Multiple Choice, Expert Systems.

I. Introduction

Nowadays, mobile devices have opened a new horizon for learning. As most people own private portable smartphones, these devices have become a main medium of connectivity and review. Using smart devices for learning is beneficial and attractive, as the learner can access educational materials and assessment exercises at any time. However, existing assessment techniques such as the multiple-choice technique [1] do not enable a learner to modify answers in a given multiple-choice question, resulting in an inaccurate assessment grade. For this reason, the present research work extends the standard multiple-choice question technique with the ability to select wrong answers in a mobile learning scope. Extra assessments are then carried out to assess the learner's knowledge using the selected wrong answer.

II. Review of the Literature

Several mobile learning applications have been proposed due to their ability to provide more engaging and successful learning environments [2]. Chen et al. [3] proposed a mobile learning system that provides multistage guiding mechanisms when the student selects a wrong answer in a multiple-choice question; the system enhanced students' learning achievements and motivation. Huang et al. [4] developed a mobile learning tool, based on a five-step vocabulary learning (FSVL) strategy, to improve English learning for English-as-a-foreign-language (EFL) students; it employs standard multiple-choice questions to assess student learning. Koorsse et al. [5] proposed a mobile-based system that uses two multiple-choice assessment methods applying self-regulated principles to support secondary school students in science and mathematics. While many mobile-based educational systems have been proposed, adapting multiple-choice questions according to a selected wrong answer has not been considered in previous systems. Hence, our system can be used to enhance the learning assessment of learners.

III. The Proposed System

Our proposed system provides educational content and uses a novel assessment technique based on reverse multiple choice [6]. The system can be used in the classroom to assess student learning. The proposed system covers: 1) presentation of educational content, 2) generation of multiple-choice questions including their follow-up queries, and 3) performance analysis of the student. For the presentation of content, we have created an educational repository that contains a collection of educational stories, gathered from diverse online ebook libraries such as the MagicBlox library [7], BookRix [8], and others. For the multiple-choice questions, we start with the familiar multiple-choice format [1] and extend it into what we call the "Reverse Multiple-Choice Method" (RMCM). The question uses the power of wrong answer choices not just as "distractors," but to extract information about students' depth of learning from brief, machine-gradable answers. An RMCM question asks a student to weigh why a particular answer choice is incorrect, identify the segment(s) of the query on which the answer turns, then change those segment(s) to make it correct. The examiner must carefully select the answer choices for a multiple-choice query, but RMCM question databanks have lasting value and high re-usability; even having seen a question earlier, an examinee must answer it thoughtfully. The RMCM approach especially suits m-learning environments, since thinking comprises most of the effort and the actual answers are brief. Finally, for the performance analysis of students, we use the total number of correct answers given by the student to assess his/her performance. When a reverse multiple-choice option is employed, the grade is computed according to the number of correct attempts achieved by the student; for every wrong attempt the performance is decreased by a certain percentage, as sketched below.
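A minimal sketch of the attempt-based grading just described, assuming a fixed penalty per wrong attempt; the penalty rate and data layout are illustrative, not the authors' exact scheme.

```python
# Attempt-based RMCM scoring sketch (illustrative penalty of 25% per wrong attempt).

def rmcm_score(attempts_per_question: list[int], penalty: float = 0.25) -> float:
    """Score a quiz where each entry is the number of attempts a student
    needed to reach the correct answer (1 = correct on the first try)."""
    total = 0.0
    for attempts in attempts_per_question:
        wrong_attempts = attempts - 1
        total += max(0.0, 1.0 - penalty * wrong_attempts)  # each retry costs `penalty`
    return 100.0 * total / len(attempts_per_question)

print(rmcm_score([1, 2, 1, 4]))  # 75.0: full credit twice, one retry, three retries
```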
Bibliography

[1] K. M. Scouller and M. Prosser, "Students' experiences in studying for multiple choice question examinations," Studies in Higher Education, vol. 19, no. 3, pp. 267-279, Jan. 1994.
[2] K. Wilkinson and P. Barter, "Do mobile learning devices enhance learning in higher education anatomy classrooms?," Journal of Pedagogic Development, vol. 6, no. 1, 2016.
[3] C. H. Chen, G. Z. Liu, and G. J. Hwang, "Interaction between gaming and multistage guiding strategies on students' field trip mobile learning performance and motivation," British Journal of Educational Technology, vol. 47, no. 6, pp. 1032-1050, 2016.
[4] C. S. Huang, S. J. Yang, T. H. Chiang, and A. Y. Su, "Effects of situated mobile learning approach on learning motivation and performance of EFL students," Journal of Educational Technology & Society, vol. 19, no. 1, 2016.
[5] M. Koorsse, W. Olivier, and J. Greyling, "Self-regulated mobile learning and assessment: An evaluation of assessment interfaces," Journal of Information Technology Education: Innovations in Practice, vol. 13, pp. 89-109, 2014.
[6] I. M. Anand, "Reverse multiple-choice based clustering for machine learning and knowledge acquisition," International Conference on Computational Science and Computational Intelligence (CSCI), vol. 1, p. 431, 2014.
[7] "MagicBlox Children's Book Library." Available: http://magicblox.com/. Accessed: 20-Oct-2017.
[8] "BookRix." Available: https://www.bookrix.com/. Accessed: 20-Oct-2017.
Tackling item cold-start in recommender systems using word embeddings
By Manoj Reddy

We live in the digital age, where most of our activities and services are carried out over the internet. Items such as music, movies and products are consumed over the web by millions of users. The number of such items is large enough that it is impossible for a user to experience everything, and this is where recommender systems come into play. Recommender systems play the crucial role of filtering and ranking items for each user based on their individual preferences; they essentially assist the user in making decisions and help overcome the problem of information overload. These systems are responsible for understanding a user's interests and inferring their needs over time. Recommender systems are widely employed across the web and are, in many cases, a core aspect of a business. For example, on Quora, a question-answering website, the entire interface relies on the recommender system to decide what content to display to the user, from question ranking on the homepage to topic recommendation and answer ranking. The goal of a recommender system is to assist users in selecting items based on their personal interests. By doing so, it also increases the number of transactions, creating a win-win situation for both end users and the web service. Recommender systems form a relatively new and exciting field with huge future potential. The field originated from information retrieval and search engines, where the task was: given a query, retrieve the most relevant documents. In the recommender system domain, the user should be able to discover items that he/she would not have been able to search for directly. One main challenge in recommender systems is cold start, the situation when a new user or item joins the system. We are interested in item cold start: the recommender system needs to learn about the new item and decide which users to recommend it to. In this work, we propose a new approach to tackle the cold-start problem in recommender systems using word embeddings. Word embeddings are semantic representations of words in a mathematical form, such as vectors. Embeddings are very useful since they are able to capture the semantic relationships between words in the vocabulary. There are various methods to generate such a mapping, including neural networks, dimensionality reduction on a word co-occurrence matrix, and probabilistic models. The underlying idea behind these approaches is that words sharing common contexts in the corpus lie in close proximity in the semantic space. Word2vec is a popular technique by Mikolov et al. that has gained tremendous popularity in the natural language processing domain. It comes in two versions: the continuous skip-gram and the continuous bag-of-words (CBOW) model. These models overcome the problem of sparsity in text, and their effectiveness has been demonstrated on a wide range of NLP tasks. Our dataset is based on Delicious, a popular website that allows users to store, share and discover bookmarks on the web. For each bookmark, users can add tags that provide meta-information about the page, such as the topics discussed and important entities; for example, a website about research might carry tags like science, biology, experiment. The problem then becomes: given a new bookmark with tags, compute which users to recommend this new bookmark to.

For the item cold-start situation, a popular technique is to use content-based approaches and find items similar to the new item; the new item can then be recommended to the users of the computed similar items. In this paper, we propose a method to compute similar items using word embeddings of the tags present on each bookmark. Our methodology represents each bookmark as a vector by combining the word embeddings of its tags. There are various possible aggregation mechanisms; we chose the average in our experiments since it is intuitive and easy to compute. The similarity between two bookmarks is then the cosine similarity between their corresponding embedding vectors. The dataset contains around 70,000 bookmarks with around 54,000 tags. The embeddings are obtained from the GloVe project, where training is performed on Wikipedia data using aggregated global word-word co-occurrence statistics. The vocabulary of these embeddings is fairly large, containing about 400,000 words, each stored as a 300-dimensional vector. The results were evaluated manually and look promising: the recommended bookmarks were highly relevant in terms of the topics discussed. Example topics in the bookmarks included social media analysis, movie reviews, vacation planning and web development. The reason embeddings perform well is that they are able to capture the semantic information of bookmarks through their tags, which is useful in cold-start situations. Future work will involve other aggregation schemes, such as weighting tags by their importance. A more suitable evaluation would measure feedback (ratings/engagement) from users in a live recommender system and compare against other approaches. In this work, we demonstrate the feasibility of using word embeddings to tackle the item cold-start problem in recommender systems, an important problem whose solution can deliver a positive impact on recommender system performance.
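A minimal sketch of the tag-averaging and cosine-similarity steps described above, using pre-trained GloVe vectors; the file path and example tag sets are illustrative.

```python
# Represent each bookmark as the average of its tags' GloVe embeddings,
# then compare bookmarks by cosine similarity.
import numpy as np

def load_glove(path: str) -> dict[str, np.ndarray]:
    """Parse a GloVe text file into a {word: vector} dictionary."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def bookmark_vector(tags: list[str], glove: dict[str, np.ndarray]) -> np.ndarray:
    """Average the embeddings of the tags found in the vocabulary."""
    known = [glove[t] for t in tags if t in glove]
    return np.mean(known, axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

glove = load_glove("glove.6B.300d.txt")        # hypothetical local file
new_bookmark = bookmark_vector(["science", "biology", "experiment"], glove)
old_bookmark = bookmark_vector(["research", "chemistry"], glove)
print(cosine(new_bookmark, old_bookmark))      # high value -> recommend to its users
```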
Analyze Unstructured Data Patterns for Conceptual Representation
Authors: Aboubakr Aqle, Dena Al-Thani and Ali Jaoua

Online news media provide aggregated news and stories from different sources all over the world, with up-to-date coverage. The main goal of this study is to provide a homogeneous source for the news and to represent it in a new conceptual framework, through which the user can easily and quickly find different updated news via the designed interface. The mobile app implementation is based on modeling a multi-level conceptual analysis frame. The main concepts of a domain are discovered from the hidden unstructured data analyzed by the proposed solution: concepts are discovered by analyzing data patterns and are structured into a tree-based interface for easy navigation by the end user. Our final experimental results show that analyzing the news before displaying it, and restructuring the final output into a conceptual multilevel structure, produces a new display frame that helps the end user find related information of interest.
A Machine Learning Approach for Detecting Mental Stress Based on Biomedical Signal Processing
Authors: Sami Elzeiny and Marwa Qaraqe

Mental stress occurs when a person perceives abnormal demands or pressures that influence the sense of well-being; these high demands sometimes exceed human capabilities to cope. Stressors such as workload, inflexible working hours, financial problems, or handling more than one task can cause work-related stress, which in turn leads to less productive employees. Lost productivity costs the global economy approximately US$1 trillion per year [1], and a survey conducted among 7,000 workers in the U.S. found that 42% had left their job to escape a stressful work environment [2]. Some people can handle stress better than others, so stress symptoms vary. Stress symptoms can affect the human body and wear a person down both physically and mentally. Hopelessness, anxiety, and depression are examples of emotional symptoms, while headaches, over-eating, sweaty hands, and dryness of the mouth are physical signs of stress. There are also behavioral cues for stress, like aggression, social withdrawal, and loss of concentration [3]. When a threat is perceived, a survival mechanism called the "fight or flight" response is activated to help the human body adapt to the situation quickly. In this mechanism, the central nervous system (CNS) signals the adrenal glands to release the hormones cortisol and adrenaline, which boost glucose levels in the bloodstream, quicken the heartbeat, and raise blood pressure. If the CNS does not succeed in returning to its normal state, the body's reaction continues, which in turn increases the possibility of a stroke or heart attack [4]. There are several techniques used to explore physiological and physical stress measures: an electrocardiogram (ECG) measures the heart's electrical activity, electroencephalography (EEG) records the brain's electrical activity, electrodermal activity (EDA) or galvanic skin response (GSR) measures continuous variations in the skin's electrical characteristics, electromyography (EMG) records electrical activity in muscles, photoplethysmography (PPG) estimates skin blood flow, and infrared (IR) sensing tracks eye activity. Prolonged, ongoing worrying can lead to chronic stress; this type of stress is the most harmful and has been linked to cancer and cardiovascular disease (CVD) [5]. Therefore, several approaches have been proposed in an attempt to identify stress triggers and the amount of stress. Some of these methods use instruments such as questionnaires to assess affective states, but such techniques usually suffer from memory and response biases. Stress detection via the analysis of various bio-signals is deemed more valuable and has thus been the focus of modern research: bio-signals are collected from participants and subjected to advanced signal processing algorithms to extract salient features for classification by machine learning algorithms. In our project, we are interested in exploring new machine learning techniques coupled with wearable devices that record various bio-signals. The goal is the development of an automatic stress detection system based on the analysis of bio-signals through signal processing and machine learning. The outcome of this research will allow users to be notified when their bodies enter a state of unhealthy stress so that they can take preventative action to avoid unnecessary consequences.
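To make the intended pipeline concrete, here is a minimal sketch of feature extraction plus classification on synthetic heart-rate data; the HRV features are standard measures, but the synthetic parameters and model choice are illustrative assumptions, not the project's final design.

```python
# Toy stress-vs-rest classifier over simple HRV features computed from
# RR intervals (which are derivable from ECG or PPG signals).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def hrv_features(rr_ms: np.ndarray) -> list[float]:
    """Mean RR, SDNN and RMSSD for one window of RR intervals (milliseconds)."""
    diffs = np.diff(rr_ms)
    return [rr_ms.mean(), rr_ms.std(), float(np.sqrt(np.mean(diffs ** 2)))]

rng = np.random.default_rng(0)
X, y = [], []
for label, (mean_rr, spread) in enumerate([(850, 60), (700, 30)]):  # rest, stress
    for _ in range(100):
        X.append(hrv_features(rng.normal(mean_rr, spread, size=60)))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```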
Inhomogeneous Underwater Visible Light Communications: Performance Evaluation
Authors: Noha Hassan Anous, Mohamed Abdallah and Khalid Qaraqe

In this work, the performance of an underwater visible light communications (VLC) vertical link is evaluated. The underwater environment is known for its inhomogeneous nature versus depth. A mathematical model for the received power (Pr) is derived, and bit error rates (BER) are computed under different underwater conditions. A numerical example illustrates the deduced model. Our results suggest that an optimum transmitter-receiver separation exists, where the BER is minimized at a certain transmission orientation.
Framework of experiential learning to enhance student engineering skills
Authors: Fadi Ghemri, Houssem Fadi and Abdelaziz Bouras

In this research work, we propose a framework of experiential learning to enhance students' work skills and experience. This research aims to contribute to the development and expansion of local industry through the conduct of long-term fundamental research that contributes to the science base and to understanding the needs of the national economy. It does so by providing an adapted method and by enhancing the teaching contents and pedagogical organization to be more accurate and better adapted to the competency requirements of local employers.
QCRI's Live Speech Translation System
Authors: Fahim Dalvi, Yifan Zhang, Sameer Khurana, Nadir Durrani, Hassan Sajjad, Ahmed Abdelali, Hamdy Mubarak, Ahmed Ali and Stephan Vogel

In this work, we present Qatar Computing Research Institute's live speech translation system. Our system works with both Arabic and English. It is designed using an array of modern web technologies to capture speech in real time, and to transcribe and translate it using state-of-the-art Automatic Speech Recognition (ASR) and Machine Translation (MT) systems. The platform is designed to be useful in a wide variety of situations like lectures, talks and meetings. It is often the case in the Middle East that audiences in talks understand either Arabic or English alone. This system enables the speaker to talk in either language, and the audience to understand what is being spoken even if they are not bilingual.

The system consists of three primary modules: i) a web application, ii) an ASR system, and iii) a statistical/neural MT system. The three modules are optimized to work jointly and to process the speech at a real-time factor close to one, which means that the systems are optimized to keep up with the speaker and provide the results with a short delay, comparable to what we observe in (human) interpretation. The real-time factor for the entire pipeline is 1.18.

The web application is based on the standard HTML5 WebAudio application programming interface. It captures speech input from a microphone on the user's device and transmits it to the backend servers for processing. The servers send back the transcriptions and translations of the speech, which are then displayed to the user. Our platform also features a way to instantly broadcast live sessions, so anyone can see the transcriptions and translations of a session in real time without being physically present at the speaker's location.

The ASR system is based on KALDI, a state-of-the-art toolkit for speech recognition. We use a combination of time delay neural networks (TDNN) and long short-term memory networks (LSTM) to ensure real-time transcription of the incoming speech while ensuring high-quality output. The Arabic and English systems have average word error rates of 23% and 9.7% respectively. The Arabic system consists of the following components: i) a character-based lexicon of size 900K, which maps words to sound units to learn acoustic representations; ii) 40-dimensional high-resolution features extracted for each speech frame to digitize the audio signal; iii) a 100-dimensional i-vector for each frame to facilitate speaker adaptation; iv) TDNN acoustic models; and v) a tri-gram language model trained on 110M words and restricted to a 900K vocabulary.

The MT system has two choices for the backend: a statistical phrase-based system and a neural MT system. Our phrase-based system is trained with Moses, a state-of-the-art statistical MT framework, and the neural system is trained with Nematus, a state-of-the-art neural MT framework. We use Modified Moore-Lewis filtering to select the best subset of the available data to train our phrase-based system more efficiently. To speed up translation even further, we prune the language models backing the phrase-based system, discarding knowledge that is not frequently used. Our neural MT system, on the other hand, is trained on all the available data, as its training scales linearly with the amount of data, unlike phrase-based systems. The neural MT system is roughly 3-5% better on the BLEU scale, a standard measure of translation quality. However, existing neural MT decoders are slower than phrase-based decoders, translating 9.5 tokens/second versus 24 tokens/second. This trade-off between efficiency and accuracy barred us from picking a single final system; by enabling both technologies, we leave it up to the user to decide whether they prefer a fast or an accurate system.

Our system has been successfully demonstrated locally and globally at several venues, including Al Jazeera, MIT, BBC and TII. The state-of-the-art technologies backing the platform for transcription and translation are also available independently and can be integrated seamlessly into any external platform. The speech translation system is publicly available at http://st.qcri.org/demos/livetranslation.
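A minimal sketch of how such a chunked ASR-plus-MT pipeline can be orchestrated and its real-time factor measured; `transcribe` and `translate` are placeholder stubs standing in for the KALDI and Moses/Nematus backends, not QCRI's actual API.

```python
# Streaming pipeline skeleton: audio chunks -> ASR -> MT, with the
# real-time factor computed as processing time over audio time.
import time

def transcribe(audio_chunk: bytes) -> str:
    return "dummy transcript"     # placeholder for the KALDI ASR backend

def translate(text: str) -> str:
    return "dummy translation"    # placeholder for the Moses/Nematus MT backend

def process_stream(chunks: list[bytes], chunk_seconds: float) -> float:
    """Run each chunk through ASR then MT and return the real-time factor
    (values around 1 or below mean the pipeline keeps up with the speaker)."""
    start = time.monotonic()
    for chunk in chunks:
        caption = translate(transcribe(chunk))
        # in the real system, `caption` is pushed to connected browsers here
    elapsed = time.monotonic() - start
    return elapsed / (len(chunks) * chunk_seconds)

print(process_stream([b"\x00" * 16000] * 5, chunk_seconds=1.0))
```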
Humans and bots in controversial environments: A closer look at their interactions
Authors: Reham Al Tamime, Richard Giordano and Wendy Hall

Wikipedia is the most influential popular information source on the Internet and is ranked as the fifth most visited website (Alexa, 2017) [1]. The English-language Wikipedia is a prominent source of online health information compared to other providers such as MedlinePlus and NHS Direct (Laurent and Vickers, 2009). Wikipedia has challenged the way traditional medical encyclopaedia knowledge is built by creating an open sociotechnical environment that allows non-domain experts to contribute to its articles. This sociotechnical environment also allows bots, computer scripts that automatically handle repetitive and mundane tasks, to work with humans to develop, improve, maintain and contest information in Wikipedia articles. Contestation in Wikipedia is unavoidable as a consequence of its open nature, which means that it accepts contradictory views on a topic and involves controversies. The objective of this research is to understand the impact of controversy on the relationship between humans and bots in environments that are managed by the crowd. This study analyses all 36,850 Wikipedia articles under WikiProject Medicine. Medical articles and their editing history were harvested from the Wikipedia API, covering all edits from 2001 to 2016; the data includes the revision ID, username, timestamp, and comment. The articles under WikiProject Medicine contain 6,220,413 edits by around 1,285,936 human and bot editors. To measure controversy, we studied reverted and undone edits. A revert on Wikipedia occurs when an editor, whether human or bot, restores an article to an earlier version after another editor's contribution. Undone edits are reverted single edits from the history of a page, without simultaneously undoing all constructive changes that have been made since the previous edit. Reverted and undone edits that occur systematically indicate controversy and conflict (Tsvetkova et al., 2017). To measure the relationship between humans and bots, we focused on both positive and negative relationships. A positive relationship is when an editor, such as a human, endorses another editor, such as a bot, by reverting or undoing a recent edit made to the other editor's contribution. A negative relationship is when an editor, such as a human, discards another editor, such as a bot, by reverting or undoing the other editor's contribution. Our results show that there is a relationship between controversial articles and the development of a positive relationship between humans and bots, and demonstrate that bots and humans can behave differently in controversial environments. The study highlights some of the important features of building health-related knowledge on Wikipedia. The contribution of this work is to build on previous theories that consider web-based systems as social machines. These theories recognise the joint contribution of humans and machines to activities on the web, but assume a very static type of relationship that is not sensitive to the environment in which humans and machines operate. Understanding the interactions between humans and bots is crucial for designing crowdsourced environments that are integrative to their human and non-human populations. We discuss how our findings can help set up future research directions and outline important implications for research on crowds.
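One common way to operationalise revert detection is to hash each revision's content: a revision whose hash matches an earlier revision restores that version and thus reverts everything in between. The sketch below assumes this hash-matching approach on a toy edit history; the column names are illustrative, not the study's actual schema.

```python
# Detect reverts in a toy edit history by matching revision content hashes.
import pandas as pd

history = pd.DataFrame({
    "rev_id": [1, 2, 3, 4],
    "editor": ["HumanA", "HumanB", "CleanupBot", "HumanA"],
    "sha1":   ["aaa", "bbb", "aaa", "ccc"],   # rev 3 restores rev 1's content
}).sort_values("rev_id")

seen: dict[str, int] = {}       # content hash -> earliest revision with it
reverts = []
for row in history.itertuples():
    if row.sha1 in seen:        # content seen before: this edit is a revert
        reverts.append((row.rev_id, seen[row.sha1], row.editor))
    else:
        seen[row.sha1] = row.rev_id

print(reverts)  # [(3, 1, 'CleanupBot')]: the bot reverted HumanB's rev 2
```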
References:
Laurent, M. R. & Vickers, T. J. (2009) 'Seeking Health Information Online: Does Wikipedia Matter?' J Am Med Inform Assoc, 16(4), 471-479.
Tsvetkova, M., García-Gavilanes, R., Floridi, L. and Yasseri, T. (2017) 'Even good bots fight: The case of Wikipedia.' PLoS One, 12(2): e0171774.
[1] https://www.alexa.com/topsites
Nonorthogonal Multiple Access for Visible Light Communications: Complementary Technology Enabling High Data Rate Services for 5G Networks
Authors: Galymzhan Nauryzbayev and Mohamed Abdallah

Introduction: The last decades have seen explosive growth in a myriad of wireless communication applications, which have become an inevitable part of everyday life. Such services are characterized by high data content and consequently require high data rates. From the fundamentals of information theory, the rate at which information can be delivered to the receiver over a wireless channel is strongly linked to the signal-to-noise ratio (SNR) of the information signal and the corresponding channel bandwidth. Past rate gains were mainly obtained at the price of substantially increased bandwidth (Hz) and energy (joules) resources, and significant spectrum scarcity has become a noticeable burden. Moreover, it has been shown that exploiting additional RF bandwidth is no longer a viable solution to meet the high demand for wireless applications; 5G systems, for example, are expected to provide a 1 Gbps cell-edge data rate and to support data rates between 10 Gbps and 50 Gbps. To satisfy this demand, optical wireless communication (OWC) has been considered a promising research area. One of these complementary technologies is visible light communication (VLC), which has several advantages such as a huge unoccupied spectrum, immunity to electromagnetic interference, low infrastructural expenditure, etc. VLC has gained considerable attention as an effective means of transferring data at high rates over short distances, e.g. indoor communications. A typical VLC system consists of a source (light-emitting diodes, LEDs) that converts the electrical signal to an optical signal, and a receiver that converts the optical power into electrical current using detectors (photodiodes, PDs); light beams propagating through the medium deliver the information from the transmitter to the receiver. To satisfy current and future demands for ever-higher data rates, the research community has focused on non-orthogonal multiple access (NOMA), regarded as one of the emerging wireless technologies expected to play an important role in 5G systems due to its ability to serve many more users, through non-orthogonal resource allocation, than traditional orthogonal multiple access (OMA) schemes. NOMA has therefore been shown to be a promising instrument for improving the spectral efficiency of modern communication systems in combination with other existing technologies.

Purpose: This work aims to investigate the performance of a spectrally and energy efficient orthogonal frequency-division multiplexing (SEE-OFDM) based VLC system combined with the NOMA approach. We model a system consisting of one transmitter and two receivers located in an indoor environment.

Methods: First, we specify the users' locations and estimate the channel state information to determine the so-called "near" and "far" users needed to implement the NOMA approach. We assume that the "near" user exploits a successive interference cancellation algorithm for interference decoding, while the other user treats the interfering signal as noise. Next, we consider two coefficients defining the power portions allocated to the receivers. We then apply an algorithm to successively demodulate the transmitted signals, since each user observes a superposition of the signals designated for both receivers, with a predefined target bit-error rate (BER) threshold of 10^-4. Once the target BER is achieved, we estimate the data rate obtainable for a certain set of power-allocation coefficients.

Results: The results show that the indoor SEE-OFDM-based VLC network can be efficiently combined with NOMA, and the target BER can be achieved by both receivers. Moreover, the BER of the "far" user is better, since more power is allocated to this user. Next, we evaluate the achievable data rate and compare the results with those attainable for OMA. The NOMA approach outperforms the OMA results.

Conclusions: We analyzed the performance of a two-user indoor VLC network scenario deployed with SEE-OFDM and NOMA techniques. It was shown that the recently introduced SEE-OFDM technique can be effectively exploited along with the non-orthogonal approach to achieve the higher spectral efficiency promised by NOMA. Both receivers were shown to be able to achieve the target BER within a narrow range of the power-allocation coefficients. Finally, for the defined system parameters, it was demonstrated that the NOMA approach achieves higher data rates compared to the OMA scenario.
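As a toy numeric illustration of the NOMA-versus-OMA comparison reported above, the sketch below evaluates standard two-user downlink rate formulas; the channel gains, power split and noise level are illustrative assumptions and do not model the paper's SEE-OFDM/VLC specifics.

```python
# Two-user downlink NOMA vs. OMA sum rates (toy parameters).
import numpy as np

P, N0 = 1.0, 0.01            # total transmit power, noise power
g_near, g_far = 1.0, 0.25    # channel gains of the "near" and "far" users
a_far = 0.8                  # power fraction allocated to the far user
a_near = 1.0 - a_far

# NOMA: the far user treats the near user's signal as noise; the near user
# removes the far user's signal via successive interference cancellation (SIC).
r_far_noma = np.log2(1 + a_far * P * g_far / (a_near * P * g_far + N0))
r_near_noma = np.log2(1 + a_near * P * g_near / N0)

# OMA baseline: orthogonal time/frequency slots, each user gets half the resources.
r_far_oma = 0.5 * np.log2(1 + P * g_far / N0)
r_near_oma = 0.5 * np.log2(1 + P * g_near / N0)

print(f"NOMA sum rate: {r_far_noma + r_near_noma:.2f} bits/s/Hz")
print(f"OMA sum rate:  {r_far_oma + r_near_oma:.2f} bits/s/Hz")
```

With these toy numbers the NOMA sum rate exceeds the OMA one, mirroring the qualitative conclusion of the abstract.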
Deep Learning for Traffic Analytics: Application to FIFA 2022
Authors: Abdelkader Baggag, Abdulaziz Yousuf Al-Homaid, Tahar Zanouda and Michael Aupetit

As urban data keeps getting bigger, deep learning is coming to play a key role in providing big-data predictive analytics solutions. We are interested in developing a new generation of deep learning based computational technologies that predict traffic congestion and support crowd management. In this work, we are mainly interested in efficiently predicting future traffic with high accuracy. The proposed deep learning solution allows revealing the latent (hidden) structure common to different cities in terms of dynamics. The data-driven insights of traffic analytics will help stakeholders, e.g., security forces, stadium management teams, and travel agencies, to take fast and reliable decisions to deliver the best possible experience for visitors. Current traffic data sources in Qatar are incomplete, as sensors are not yet permanently deployed for data collection. The following topics are being addressed:

Predictive Crowd and Vehicle Traffic Analytics: Forecasting the flow of crowds and vehicles is of great importance to traffic management, risk assessment and public safety. It is affected by many complex factors, including spatial and temporal dependencies, infrastructure constraints and external conditions (e.g. weather and events). If one can predict the flow of crowds and vehicles in a region, tragedies can be mitigated or prevented by utilizing emergency mechanisms in advance, such as conducting traffic control, sending out warnings, signaling diversion routes or evacuating people. We propose a deep-learning-based approach to collectively forecast the flow of crowds and vehicles. Deep models, such as deep neural networks, are currently the best data-driven techniques for handling heterogeneous data and for discovering and predicting complex data patterns such as traffic congestion and crowd movements. We will focus in particular on predicting the inflow and outflow of crowds or vehicles to and from important areas, tracking the transitions between these regions (a minimal sketch of such a predictor is given after this abstract). We will study different deep architectures to increase the accuracy of the predictive model, and explore ways to integrate spatio-temporal information into these models. We will also study how deep models can be re-used without retraining to handle new data and better scale to large data sets.

What-If Scenario Modeling: Understanding how congestion or overcrowding at one location can cause ripples throughout a transportation network is vital to pinpoint traffic bottlenecks for congestion mitigation or emergency response preparation. We will use predictive modeling to simulate different states of the transportation network, enabling the stakeholder to test different hypotheses in advance. We will use the theory of multi-layer networks to model and then simulate the complex relationship between different but coexisting types of flows (crowds, vehicles) and infrastructures (roads, railways, crossings, passageways, squares...). We will propose a visual analytics platform that provides the necessary visual handles to generate different cases, navigate through different scenarios, and identify potential bottlenecks, weak points and resilient routes. This visualization platform, connected to the real-time predictive analytics platform, will support stakeholder decisions by automatically matching the current situation to already explored scenarios and possible emergency plans.

Safety and Evacuation Planning based on Resilience Analytics: Determining the best route to clear congested or overcrowded areas, or new routes to divert traffic and people from such areas, is crucial to maintaining high security and safety levels. The visual analytics platform and the predictive model will enable the testing and set-up of safety and evacuation plans to be applied in case of an upcoming emergency detected by the predictive analytics platform. Overall, the proposed approach is independent of the type of flows, i.e., vehicles or people, and of the infrastructure, as long as proper sensors (magnetic loops, video cameras, GPS tracking, etc.) provide relevant data about these flows (number of people or vehicles per time unit along a route of some layer of the transportation network). The proposed data-driven learning models are efficient, and they adapt to the specificities of the type of flow by updating the relevant parameters during the training phase.
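A minimal sketch of the kind of grid-based flow predictor referred to above: the city is divided into an HxW grid with two channels (inflow, outflow), and a small convolutional network maps the last T snapshots to the next one. The architecture and sizes are illustrative assumptions, not the project's final model.

```python
# Toy next-step crowd/vehicle flow predictor over a city grid.
import torch
import torch.nn as nn

T, H, W = 4, 16, 16                       # history length and grid size

model = nn.Sequential(
    nn.Conv2d(2 * T, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, kernel_size=3, padding=1),   # predicted inflow/outflow maps
)

history = torch.randn(8, 2 * T, H, W)     # batch of stacked past flow snapshots
target = torch.randn(8, 2, H, W)          # next-step flows (would come from sensors)
loss = nn.functional.mse_loss(model(history), target)
loss.backward()                           # gradients for one training step
print(loss.item())
```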
Virtual Reality Glove for Falconry
Falconry is a traditional Arabic sport with great significance in Qatari society, as it is a major part of the culture. Falconry is about hunting small birds or animals using different types of falcons. Falconry in virtual reality (VR) can help preserve Qatari culture by making the sport easy to access for all kinds of people. The main idea behind this project is to educate people living in Qatar, as well as visitors for the 2022 FIFA World Cup, and let them experience real-time falconry. The proposed design could also help professional falconers use and learn VR technology, which can make them better handlers. Moreover, rapid development in VR-related technologies has made real-time imitation of the real world possible. A VR environment can be built with software such as Unity3D, but realizing the real-time feel, weight, pressure, movement, and vibration of any kind in VR is hard and still a work in progress. There are also new technologies in this field, such as haptics, but they are expensive, and there is no definite hardware that actually mimics the movement of a falcon standing on the hand. The main hardware design is a glove that can be detected virtually and can detect the movement of different types of falcons on the player's hand. The design proposed in our project gives an extensive real-time feel of the falcon on the user's hand using available hardware components that are cheap and easy to maintain. The design of our glove paves the way for further enhancement of movement realization in VR for other sports, medicine, etc. The major requirements for the game of falconry were obtained from the Qatari Society of Al-Gannas, with whom we collaborate on this project.
Enabling Efficient Secure Multiparty Computation Development in ANSI C
Authors: Ahmad Musleh, Soha Hussein, Khaled M. Khan and Qutaibah M. Malluhi

Secure Multi-Party Computation (SMPC) enables parties to compute a public function over private inputs. A classical example is the millionaires problem, where two millionaires want to figure out who is wealthier without revealing their actual wealth to each other. The insight gained from the secure computation is nothing more than what is revealed by the output (in this case, who was wealthier, but not the actual value of the wealth). Other applications of secure computation include secure voting, online bidding and privacy-preserving cloud computation, to name a few. Technological advancements are making secure computation practical, and recent optimizations have made dramatic improvements to its performance. However, there is still a need for effective tools that facilitate the development of SMPC applications using standard and familiar programming languages and techniques, without requiring the involvement of security experts with special training and background. This work addresses the latter problem by enabling SMPC application development through programs (or repurposed existing code) written in a standard programming language such as ANSI C.

Several high-level language (HLL) platforms have been proposed to enable secure computation, such as Obliv-C [1], ObliVM [2] and Frigate [3]. These platforms utilize a variation of Yao's garbled circuits [4] in order to evaluate the program securely. The source code written for these frameworks is converted into a lower-level intermediate language that utilizes garbled circuits for program evaluation. Garbled circuits have one party (the garbler) who compiles the program that the other party (the evaluator) runs, and the communication between the two parties happens through oblivious transfer; this allows two parties to do the evaluation without a trusted third party. These frameworks have two common characteristics. First, they either define a new language [2] or make a restricted extension of a current language [1]; this is somewhat prohibitive, as it requires the programmer to have a sufficient understanding of SMPC-related constructs and semantics, a process that is error-prone and time-consuming. Second, they use combinational circuits, which often require creating and materializing the entire circuit (whose size may be huge) before evaluation, introducing a restriction on the program being written. TinyGarble [5], however, is a secure two-party computation framework based on sequential circuits; compared with the frameworks mentioned earlier, TinyGarble outperforms them by orders of magnitude.

We are developing a framework that can automatically convert an HLL program (in this case ANSI C) into a hardware definition language, which is then evaluated securely. The benefit of such a transformation is that it does not require knowledge of unfamiliar SMPC constructs and semantics, and it performs the computation in a much more efficient manner. We are combining the efficiency of sequential circuits for computation with the expressiveness of an HLL like ANSI C to develop a secure computation framework that is expected to be both effective and efficient. Our proposed approach is two-fold. First, it offers a separation of concerns between the function of computation, written in C, and the secure computation policy to be enforced. This leaves the original source code unchanged; the programmer is only required to specify a policy file naming the functions/variables that need secure computation. Second, it leverages the current state-of-the-art framework for generating sequential circuits: the idea is to convert the original source code to Verilog (a hardware definition language), which can then be transformed into the standard circuit description that TinyGarble [5] runs. This enables us to leverage TinyGarble's efficient sequential circuits. The result is the best of both worlds: an HLL program that is converted to, and evaluated as, a sequential circuit.

References
[1] S. Zahur and D. Evans, "Obliv-C: A language for extensible data-oblivious computation," IACR Cryptology ePrint Archive, vol. 2015, p. 1153, 2015.
[2] C. Liu, X. S. Wang, K. Nayak, Y. Huang, and E. Shi, "ObliVM: A programming framework for secure computation," in 2015 IEEE Symposium on Security and Privacy, SP 2015, San Jose, CA, USA, May 17-21, 2015, pp. 359-376, 2015.
[3] B. Mood, D. Gupta, H. Carter, K. R. B. Butler, and P. Traynor, "Frigate: A validated, extensible, and efficient compiler and interpreter for secure computation," in IEEE European Symposium on Security and Privacy, EuroS&P 2016, Saarbrücken, Germany, March 21-24, 2016, pp. 112-127, 2016.
[4] A. C. Yao, "Protocols for secure computations (extended abstract)," in 23rd Annual Symposium on Foundations of Computer Science, Chicago, Illinois, USA, 3-5 November 1982, pp. 160-164, 1982.
[5] E. M. Songhori, S. U. Hussain, A. Sadeghi, T. Schneider, and F. Koushanfar, "TinyGarble: Highly compressed and scalable sequential garbled circuits," in 2015 IEEE Symposium on Security and Privacy, SP 2015, San Jose, CA, USA, May 17-21, 2015, pp. 411-428, 2015.
Demonstration of DRS: Dynamic Resource Scheduler for Distributed Stream Processing
By Yin Yang

We propose to demonstrate DRS, a novel dynamic resource scheduler module for distributed stream processing engines (SPEs). The main idea is to model the system response time as a function of input characteristics, including the volume, velocity, and distribution statistics of the streaming data. Based on this model, DRS decides on the amount of resources to allocate to each streaming operator in the system, so that (i) the system satisfies real-time response constraints at all times and (ii) total resource consumption is minimized. DRS is a key component for enabling elasticity in a distributed SPE. DRS is a major outcome of the QNRF/NPRP project titled "Real-Time Analytics over Sports Video Streams". As the title suggests, the goal of this project is to analyze sports (especially soccer) videos in real time, using distributed computing techniques. DRS fits the big picture of the project, as it enables dynamic provisioning of computational resources in response to changing data distributions in the input sports video streams. For instance, consider player detection based on region proposals, e.g., using Faster R-CNN. Even though the frame rate of the soccer video stays constant, the number of region proposals can vary drastically and unpredictably (e.g., in one frame there is only one player, and in the next frame there can be all 22 players). Consequently, the workload of the convolutional neural network that performs the detection for each region proposal varies over time. DRS ensures that there are always sufficient resources (e.g., GPUs) for processing the video in real time at any given time point; meanwhile, it avoids over-provisioning by accurately predicting the amount of resources needed (a toy sketch of this idea appears below). The demo will include a poster, a video, and a live, on-site demo using a laptop computer connected to a cluster of remote machines. We will demonstrate to the audience how DRS works, when it changes the resource allocation plan, how it executes the new allocation, and the underlying model of DRS. Acknowledgement: This publication was made possible by NPRP grant NPRP9-466-1-103 from the Qatar National Research Fund (a member of Qatar Foundation). The findings achieved herein are solely the responsibility of the author[s].
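As a toy illustration of the model-based allocation idea behind DRS, the sketch below estimates each operator's latency with a simple queueing-style formula and adds workers greedily until an end-to-end target is met; the latency model and numbers are illustrative assumptions, not DRS's actual formulation.

```python
# Greedy worker allocation against a latency target (toy model).

def op_latency(arrival_rate: float, service_rate: float, workers: int) -> float:
    """Rough per-operator latency; blows up as utilisation approaches 1."""
    capacity = workers * service_rate
    if capacity <= arrival_rate:
        return float("inf")                # overloaded operator
    return 1.0 / (capacity - arrival_rate)

def allocate(ops: list[tuple[float, float]], target: float, budget: int) -> list[int]:
    """ops holds (arrival_rate, service_rate) per operator; returns worker counts."""
    workers = [1] * len(ops)
    while sum(workers) < budget:
        latencies = [op_latency(a, s, w) for (a, s), w in zip(ops, workers)]
        if sum(latencies) <= target:
            break                          # end-to-end constraint satisfied
        workers[latencies.index(max(latencies))] += 1   # help the bottleneck
    return workers

print(allocate([(8.0, 3.0), (5.0, 4.0)], target=0.5, budget=12))  # [4, 3]
```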
Integration of Multisensor Data and Deep Learning for Real-Time Occupancy Detection for Building Environment Control Strategies
Authors: Dabeeruddin Syed and Amine Bermak

One of the most prominent areas of energy consumption in residential units is heating, ventilation and air-conditioning (HVAC). Conventional HVAC systems depend on wired thermostats that are deployed at fixed locations; they are not convenient and do not respond to the dynamic nature of a building's thermal envelope. Moreover, the spatial temperature distribution is not uniform. Current environment control strategies are based on the maximum occupancy numbers for the building, yet there are always areas of a building that are used less frequently and are cooled needlessly. Having real-time occupancy data, and mining it to predict the building's occupancy patterns, helps in developing energy-effective strategies for regulating HVAC systems through a central controller. In this work, we have deployed a network of multiple wireless sensors (humidity, temperature, CO2 sensors, etc.), computational elements (in our case a Raspberry Pi, to keep the system cost-effective) and cameras, with the aim of integrating the data from the multiple sensors in a large multifunction building. The sensors are deployed at multiple locations so that the non-uniform spatial temperature distribution is accounted for, and they capture the various environmental conditions at a temporal and much finer spatial granularity. The Pi camera is connected to a Raspberry Pi fixed at an elevation. Detection is performed using the OpenCV library and Python, and the system can detect occupancy with an accuracy of up to 90%. For occupancy detection and counting, a linear SVM is trained on positive and negative image samples, and evaluation on test images or video feeds uses non-maximum suppression (NMS) to discard redundant, overlapping HOG (Histogram of Oriented Gradients) boxes (a minimal sketch of this detection step is given below). The data collected by the sensors is sent to the central controller, on which the video processing algorithm also runs. Using the multiple environmental factors available to us, models are developed to predict usage in the building. These models help us define the control parameters for the HVAC systems adaptively, such that the parameters not only reduce the energy used in the building but also maintain thermal comfort. The control parameters are then sent as IR signals to AC systems controlled by IR remotes, or as wireless signals to AC systems controlled by wireless thermostats. In comparison to a conventional temperature controller, our system avoids overcooling areas, saving energy, and predicts occupancy so that the temperature is brought within the human comfort zone before over-occupancy takes place. Our system also benefits from wireless sensors that operate on low power, though the trade-off between power and communication frequency must be well managed. The system additionally has two features: firstly, it can provide live video streaming for remote monitoring using a web browser as the user interface; secondly, it sends automatic notifications in case of anomalies such as abnormally high temperatures or high carbon dioxide concentration in a room. These two features can serve as cost-effective replacements for traditional CCTV and burglar-alarm systems, respectively.
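A minimal sketch of the detection step described above, using OpenCV's stock HOG descriptor with its pre-trained pedestrian SVM (the project trains its own linear SVM; the stock detector keeps this sketch self-contained), plus a crude overlap filter in place of full non-maximum suppression.

```python
# Count people in a camera frame with a HOG + linear-SVM detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("room_snapshot.jpg")    # hypothetical camera frame
assert frame is not None, "frame not found"

boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

# Keep a box only if its centre is not inside an already-kept (larger) box.
kept = []
for (x, y, w, h) in sorted(boxes, key=lambda b: -b[2] * b[3]):
    cx, cy = x + w // 2, y + h // 2
    if not any(kx < cx < kx + kw and ky < cy < ky + kh for kx, ky, kw, kh in kept):
        kept.append((x, y, w, h))

print(f"estimated occupancy: {len(kept)}")
```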
Keywords: wireless sensors, air conditioning, OpenCV, NMS algorithm, Histogram of Oriented Gradients, thermal comfort.
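The abstract describes its person detector as a linear SVM over HOG features with NMS pruning of overlapping boxes. As a minimal sketch of that detection step, the Python snippet below uses OpenCV's pretrained pedestrian HOG+SVM model rather than the authors' custom-trained SVM; the camera index, window stride, and overlap threshold are illustrative assumptions, not values from the abstract.

    import cv2
    import numpy as np

    def non_max_suppression(boxes, overlap_thresh=0.65):
        # Greedy NMS: keep the box with the largest bottom edge, drop any
        # remaining box whose overlap with it exceeds the threshold.
        if len(boxes) == 0:
            return np.empty((0, 4), dtype=int)
        boxes = boxes.astype(float)
        x1, y1 = boxes[:, 0], boxes[:, 1]
        x2, y2 = x1 + boxes[:, 2], y1 + boxes[:, 3]
        areas = (x2 - x1) * (y2 - y1)
        order = np.argsort(y2)
        keep = []
        while len(order) > 0:
            i = order[-1]
            keep.append(i)
            rest = order[:-1]
            w = np.maximum(0.0, np.minimum(x2[i], x2[rest]) - np.maximum(x1[i], x1[rest]))
            h = np.maximum(0.0, np.minimum(y2[i], y2[rest]) - np.maximum(y1[i], y1[rest]))
            overlap = (w * h) / areas[rest]
            order = rest[overlap <= overlap_thresh]
        return boxes[keep].astype(int)

    # HOG descriptor with OpenCV's default people detector (a stand-in for
    # the custom linear SVM trained by the authors).
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(0)  # Pi camera exposed via V4L2; index assumed
    ok, frame = cap.read()
    if ok:
        rects, _ = hog.detectMultiScale(frame, winStride=(8, 8),
                                        padding=(8, 8), scale=1.05)
        people = non_max_suppression(np.array(rects))
        print("estimated occupancy:", len(people))
    cap.release()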
-
-
-
Challenges and Successes in Setting up a 3D Printing Lab Integrated with EMR and VNA in a Tertiary Care Hospital in the Middle East
Authors: Zafar Iqbal, Sofia Ferreira, Avez Rizvi, Smail Morsi, Fareed Ahmed and Deepak Kaura
In recent years, 3D printing has shown exponential growth in clinical medicine, research, and education (Carlos et al.). Imaging departments are at the center of 3D printing service delivery, leading efforts to establish 3D printing labs and making them a unique contribution to patient care (Kent Thielen et al.). Building a fully electronic medical record (EMR)-integrated workflow to deliver 3D services offers unique advantages for clinicians. At Sidra Medicine, we have successfully tested the electronic process by generating 3D orders and delivering printed models such as hearts, skulls, and mandibles. To support clinicians and 3D printing lab staff, we developed an automated workflow in our EMR and radiology information system (RIS). Clinicians use our Cerner EMR to order 3D printing services by selecting the available 3D printing order for each modality, i.e., MR, CT, and US. The order also allows them to add their requirements by filling out the relevant order entry fields (OEFs). 3D printing orders populate the RIS worklist for 3D lab staff to start, complete, and document the service process. Consultation with ordering clinicians and radiologists is also vital in the 3D printing process, so we developed a message template for communication between lab staff and clinicians, with the capability to attach 3D model PDFs. 3D lab staff upload the models to our Vendor Neutral Archive (VNA) before completing the order, storing the models in the patient's record. Building a 3D workflow in an existing EMR has the potential to facilitate the 3D service delivery process. It allows 3D printing to rank among the other modalities important for patient care by residing where all other clinical care orders reside, and it allows 3D lab staff to document the process through quick communication.
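The abstract does not describe the technical interface behind these EMR orders. Purely as a hypothetical illustration, an order of this kind could be expressed as an HL7 FHIR ServiceRequest posted to an EMR's FHIR endpoint; every URL, identifier, and field value below is invented for the sketch, and none of it is taken from Sidra Medicine's actual Cerner integration.

    import requests

    # Hypothetical FHIR ServiceRequest for a CT-based 3D printing order.
    order = {
        "resourceType": "ServiceRequest",
        "status": "active",
        "intent": "order",
        "code": {"text": "3D Printing - CT"},            # modality-specific order
        "subject": {"reference": "Patient/example"},      # placeholder patient
        "note": [{"text": "Mandible model, 1:1 scale"}],  # OEF-style requirements
    }

    # POST to the EMR's FHIR endpoint (URL and auth token are placeholders).
    resp = requests.post("https://emr.example.org/fhir/ServiceRequest",
                         json=order,
                         headers={"Authorization": "Bearer <token>"})
    resp.raise_for_status()
    print("order id:", resp.json()["id"])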
-
-
-
E-Learning for Persons with Disabilities
E-learning is the use of Internet technologies or computer-assisted instruction to enhance and improve knowledge and performance; knowledge is a basic human right that should be accessible to everyone regardless of disability. E-learning technologies offer learners control over content, learning sequence, pace of learning, time, and often media, allowing them to tailor their experiences to meet their personal learning objectives. This paper explores how adaptive e-learning for persons with disabilities, focusing on intellectual disabilities in Higher Education (HE), can demonstrate the importance of making technology and digital content accessible to students with disabilities. Compared with face-to-face education, and with respect to 'electronic' versus 'traditional' learning methods, adaptive e-learning can be considered a competent substitute and complement. The paper also examines current progress in Qatar HE, as ubiquitous technologies have become a positive force for transformation and a crucial element of any framework for personal development, empowerment, and inclusive development. Keywords: e-learning, persons with disabilities, intellectual disabilities, learning methods.
-
-
-
Proactive Automated Threat Intelligence Platform
Imagine you are sitting in a public coffee shop and using the free Wi-Fi on your work device. As long as you are connected to the corporate VPN, you are well protected; the moment you disconnect, you are no longer protected. Your laptop is then open to being hacked and attacked at public Wi-Fi locations such as airports, hotels, coffee shops, and parks. My proposed solution is an automated, proactive, cloud-based threat intelligence platform that monitors and detects threats attacking you in real time, not only at public locations but also at home. The system works on a zero-trust framework in which there are no trusted networks or zones: each system with an IP address has its own intrusion detection and prevention system, combined with localized analysis of malware that specifically targets you. Most antivirus and anti-malware companies do not write their own signatures; in fact, they buy them from smaller companies. My proposed solution will analyze malware targeted specifically at you and create a defensive signature within minutes, neutralizing and eradicating threats against you across your entire infrastructure within an hour. There will be no need to wait 2–3 days for antivirus and anti-malware companies to come up with signatures and offer you protection.
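The abstract does not explain how the defensive signatures would be generated. As a minimal, hypothetical sketch of the idea, the snippet below derives a crude hash-and-byte-pattern YARA rule from a captured sample; the feature selection (fixed-offset byte slices), the sample filename, and the rule structure are all illustrative assumptions, not the author's actual pipeline.

    import hashlib
    import pathlib

    def make_yara_rule(sample_path, rule_name="auto_signature"):
        # Build a minimal YARA rule from a captured malware sample: record
        # the file's SHA-256 and match on two byte patterns from the file.
        data = pathlib.Path(sample_path).read_bytes()
        sha256 = hashlib.sha256(data).hexdigest()
        # Fixed-offset slices are illustrative only; a real pipeline would
        # select distinctive code or string features instead.
        pattern_a = data[64:80].hex(" ")
        pattern_b = data[-80:-64].hex(" ")
        return (
            f"rule {rule_name}\n"
            "{\n"
            f'    meta:\n        sha256 = "{sha256}"\n'
            "    strings:\n"
            f"        $a = {{ {pattern_a} }}\n"
            f"        $b = {{ {pattern_b} }}\n"
            "    condition:\n"
            "        all of them\n"
            "}\n"
        )

    print(make_yara_rule("suspicious_sample.bin"))  # hypothetical sample file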
-
-
-
A Passive RFID Sense Tag for Switchgear Thermal Monitoring in the Power Grid
Authors: Bo Wang and Tsz-Ngai Lin
I. Background In the power grid, circuit breakers in switchgear are often the last line of defense when large systems must be protected from faults [1]; sudden switchgear failures can cause long outages, huge economic losses, and even threats to public safety. Based on field experience, the major causes of switchgear failure are loose or corroded metal connections, degraded cable insulation, and external agents (e.g., dust, water) [2]. Due to ohmic loss at these weak points, these failure causes are accompanied by an increasing thermal signature over time. With continuous thermal monitoring inside the switchgear, adequate data can be collected for timely failure prediction and prevention, especially for equipment deployed in harsh environments. II. Objective This paper presents the design of a passive radio frequency identification (RFID) sense tag, which measures temperature at critical spots of the switchgear and wirelessly (EPC C1G2 standard) transmits the data to a base station for real-time analysis. Compared with infrared imaging [2], surface acoustic wave (SAW) sensing systems, or fiber Bragg grating (FBG) sensing systems [1][3], no cables for power or communication are necessary, which avoids potential side effects such as arcing in the grid after the sensor is added. The use of a standard CMOS process results in a cost-effective solution, and the proposed passive wireless sensor can easily be retrofitted to existing switchgear with simple bolted connections. III. Passive Tag Design and Measurement Results Fig. 1 shows the proposed passive tag with temperature sensing capability. The power management unit in the chip harvests the incoming RF wave (860∼960 MHz) and sustains all the other on-chip building blocks (sensor, clock, memory, baseband, modem). The energy harvesting efficiency is in the range of 15%∼25%, depending on the operating mode of the tag. With 10 µW system power, the effective reading distance of the tag is 4.5 m∼6 m. The on-chip temperature sensor adopts a PNP bipolar transistor as the sensing device, which has a temperature sensitivity of ∼2 mV/°C [4]. Using a triple-slope analog-to-digital converter (A/D), temperature-sensitive voltages are digitized and transmitted back to the reader after modulation. Because there is no battery or other energy source on the device, the power consumption of the tag, and especially of the sensor, must be on the order of sub-µW to maintain tag sensitivity. In this work, a passive integrator is used in the A/D instead of an active one, and the nonlinearity error it introduces is compensated by adding a nonlinear current to the temperature signal. The sensor consumes 0.75 µW overall and achieves 10-bit sensing resolution (0.12 °C/LSB) within a 10 ms conversion time, corresponding to a resolution FoM of 1.08×10² pJ·K², which is among the most energy-efficient embedded sensor designs. Fig. 2(a) shows the micro-photograph of the designed passive RFID sense tag. Fig. 2(b) shows its ceramic package, which can readily be installed at the target spots with a bolted connection. By designing the antenna with an additional ground layer, the tag is able to work in switchgear with a full metal enclosure [5]. The measured tag sensitivity is -12 dBm. After measuring and calibrating multiple samples, the overall sensing precision of the tag is ±1.5 °C, which is sufficient for switchgear thermal monitoring, as shown in Fig. 3(a).
Thanks to the designed on-chip supply protection circuit, the sensor performance does not degrade much with reading power or reading distance (received power ∝ 1/distance²), as shown in Fig. 3(a). IV. Conclusion The combination of passive RFID tags with sensors enables many new applications and helps bring embedded intelligence to the legacy power grid. The designed passive sense tag is low-cost and delivers robust sensing performance, achieved by optimizing the tag at the system level and using low-power circuit design techniques. With a redesigned package, the tag can also be used in other applications such as cold supply chain or flammable goods monitoring. Acknowledgement This work was carried out in collaboration with Land Semiconductor Ltd., Hangzhou, China; the authors thank Mr. Qibin Zhu and Mr. Shengzhou Lin for their help with the measurements. References [1] G.-M. Ma et al., “A Wireless and Passive Online Temperature Monitoring System for GIS Based on Surface-Acoustic-Wave Sensor,” IEEE Trans. on Power Delivery, vol. 31, no. 3, pp. 1270–1280, June 2016. [2] Top Five Switchgear Failure Causes, NETA World. [Online]. Available: http://www.netaworld.org/sites/default/files/public/neta-journals/NWsu10-NoOutage-Genutis.pdf, accessed Oct. 2017. [3] Fundamentals of Fiber Bragg Grating (FBG) Optical Sensing, NI White Papers. [Online]. Available: http://www.ni.com/white-paper/11821/en/, accessed Oct. 2017. [4] B. Wang, M.-K. Law and A. Bermak, “A Precision CMOS Voltage Reference Exploiting Silicon Bandgap Narrowing Effect,” IEEE Trans. on Electron Devices, vol. 62, no. 7, pp. 2128–2135, July 2015. [5] Chong Ryol Park and Ki Hwan Eom, “RFID Label Tag Design for Metallic Surface Environments,” Sensors, vol. 11, no. 1, pp. 938–948, Jan. 2011.
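As a quick check of the resolution FoM quoted in Section III (energy per conversion multiplied by the square of the resolution), the snippet below simply reproduces the arithmetic from the figures given in the abstract; no values are assumed beyond those stated there.

    # Resolution FoM = power x conversion time x resolution^2
    power = 0.75e-6      # sensor power in W (0.75 uW)
    t_conv = 10e-3       # conversion time in s (10 ms)
    resolution = 0.12    # resolution in K per LSB (0.12 degC/LSB)

    energy_per_conversion = power * t_conv        # 7.5 nJ per conversion
    fom = energy_per_conversion * resolution**2   # in J*K^2
    print(f"FoM = {fom / 1e-12:.0f} pJ*K^2")      # prints: FoM = 108 pJ*K^2, i.e. 1.08x10^2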
-