Qatar Foundation Annual Research Conference Proceedings Volume 2018 Issue 3
- Conference date: 19-20 Mar 2018
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2018
- Published: 15 March 2018
Towards a multimodal classification of cultural heritage
Authors: Abdelhak Belhi and Abdelaziz Bouras

Humanity has always learned from past experiences, and national heritage is a great way to discover and access a nation's history. As a result, these priceless cultural items receive special attention and require special care. Since the wide adoption of new digital technologies, documenting, storing, and exhibiting cultural heritage assets has become more affordable and reliable. These digital records are then used in several applications: researchers saw the opportunity to use digital heritage recordings for virtual exhibitions, link discovery, and long-term preservation. Unfortunately, many cultural assets are overlooked because of missing history or missing information (metadata). As a classic solution for labeling these assets, heritage institutions often turn to cultural heritage specialists. They frequently ship their valuable assets to these specialists and must wait months or even years to, hopefully, get an answer. This carries multiple risks, such as the loss of or damage to these valuable assets, and it is a major concern for heritage institutions all around the world. Recent studies report that only 10 percent of the world's heritage is exhibited in museums; the remaining 90% is stored in museum archives, largely because of damage or a lack of metadata. After a deep analysis of the current situation, our team surveyed the state-of-the-art technologies that could overcome this problem. New machine learning and deep learning techniques such as Convolutional Neural Networks (CNNs) are making a radical change in image and big-data classification. In fact, major technology companies such as Google, Apple, and Microsoft are pushing the use of these artificial intelligence techniques to explore their enormous databases and repositories in order to better serve their users.
In this contribution, we present a classification approach that aims to play the role of a digital cultural heritage expert using machine learning and deep learning techniques. The system has two main stages. The first is the learning stage, in which the system receives as input a large dataset of labeled data, mainly images of different cultural heritage assets organized into categories. This is a critical step, as the data must be descriptive and coherent. The second is the classification stage, in which the system receives an image of an unlabeled asset and tries to extract the relevant visual features of the image, such as shapes, edges, colors, and fine details such as text. The system then analyzes these features and predicts the missing metadata, such as the category, the year, or the region. The first tests are giving promising results. Our team aims to further improve these results using a multimodal machine learning model. Such models rely on multiple learning sources (text, videos, sound recordings, images) at the same time, and our research progress shows that this technique gives very accurate predictions.
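The two-stage pipeline described above can be illustrated with a deliberately simplified stand-in: a nearest-centroid classifier over coarse intensity histograms takes the place of the CNN, but the learning stage and classification stage play the same roles. The function names and toy data below are hypothetical, not the authors' implementation.

```python
# Toy two-stage classifier: a simplified stand-in for the CNN pipeline.
# Stage 1 (learning): build one feature centroid per category from labeled images.
# Stage 2 (classification): label an unseen image by its nearest centroid.

def histogram(image, bins=4):
    """Feature extraction: a coarse intensity histogram over 0..255 pixels."""
    counts = [0] * bins
    for pixel in image:
        counts[min(pixel * bins // 256, bins - 1)] += 1
    total = len(image)
    return [c / total for c in counts]

def learn(labeled_images):
    """Learning stage: average the feature vectors of each category."""
    centroids = {}
    for label, images in labeled_images.items():
        feats = [histogram(img) for img in images]
        dim = len(feats[0])
        centroids[label] = [sum(f[i] for f in feats) / len(feats) for i in range(dim)]
    return centroids

def classify(image, centroids):
    """Classification stage: predict the label whose centroid is closest."""
    feat = histogram(image)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(feat, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical data: dark "bronze coin" images vs. bright "manuscript" scans,
# each image reduced to a short list of pixel intensities.
dataset = {
    "coin": [[20, 30, 40, 25], [15, 35, 50, 30]],
    "manuscript": [[200, 220, 240, 210], [190, 230, 250, 215]],
}
model = learn(dataset)
prediction = classify([25, 18, 45, 33], model)
```

A real system would replace the histogram features with CNN feature maps learned from large labeled collections, but the division of labor between the two stages is the same.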
Education/Industry Collaboration Modeling: An Ontological Approach
Authors: Houssem Gasmi and Abdelaziz Bouras

The higher education sector is one of the main suppliers of the workforce for the engineering industry and the economy in general. It is consistently challenged by a fast-evolving industry and hence under constant pressure to fulfill the industry's ever-changing needs: it must adapt its academic curricula to supply the industry with graduates who have up-to-date and relevant competencies. Nevertheless, a gap still exists between what education curricula offer and the skills actually needed by the industry. It is therefore crucial to find an efficient way to bridge the gap between the two worlds; doing so helps the industry cut the costs of training university graduates and assists the advancement of higher education. In response to these issues, competency-based education was developed. It first emerged in the United States, in response to growing criticism of traditional education, which was seen as increasingly disconnected from societal evolution, especially changes within the industry. Despite some criticism, the competency-based pedagogical approach has been employed by several western countries to improve their upper-secondary vocational curricula, and in recent times it has increasingly been adapted to higher education as a way to update and improve academic courses. In this research work, a semantic ontological model is presented to model competencies in the domains of education and industry. It illustrates the use of ontologies for three identified end users: employers, educators, and students. Ontologies are well suited to solving interoperability problems between different domains; they provide a shared understanding of terms across domains and help avoid the wasted effort of translating terminology between them.
They also provide opportunities for domain knowledge reuse in different contexts and applications, and they can act as a unifying framework between software systems, eliminating the interoperability and communication issues faced when translating concepts between systems. The scope of this research work is to build an ontology representing the domain concepts and to validate it by building competency models. Competencies from the domains of education and industry are defined and classified in the ontology; then, using a set of logical rules and a semantic reasoner, we can analyze the gap between different education and industry profiles. We will propose different scenarios for how the developed ontology could be used to build competency models. This paper describes how ontologies can be a relevant tool for an initial analysis, focusing on the assessment of the competencies needed by the engineering market as a case study. The research questions this work investigates are: 1) Are semantic web ontologies the best solution to model the domain and analyze the gap between industry needs and higher education? 2) Which ontology design approaches are most suitable for representing the competency model? 3) What are the limitations of ontology modeling? Two main limitations are discussed: the Open World Assumption and the limitations of semantic reasoners. This work is part of the Qatar Foundation NPRP Pro-Skima research project.
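As a rough illustration of the kind of gap analysis a semantic reasoner performs, the sketch below encodes a toy competency hierarchy as plain Python sets and dictionaries. A real implementation would use an OWL ontology and a reasoner; every competency name here is hypothetical.

```python
# Toy competency gap analysis: a plain-Python stand-in for ontology reasoning.
# A subsumption hierarchy maps each competency to its broader parent; holding
# a competency implies holding every broader competency above it.

HIERARCHY = {  # child -> parent (hypothetical competency taxonomy)
    "java_programming": "programming",
    "python_programming": "programming",
    "programming": "software_engineering",
    "penetration_testing": "cyber_security",
}

def closure(competencies):
    """Expand a set of competencies with all ancestors in the hierarchy."""
    expanded = set(competencies)
    for c in competencies:
        while c in HIERARCHY:
            c = HIERARCHY[c]
            expanded.add(c)
    return expanded

def gap(industry_profile, graduate_profile):
    """Competencies the industry requires that the graduate's profile does
    not cover, either directly or via subsumption."""
    return industry_profile - closure(graduate_profile)

# A graduate with Java skills implicitly covers 'programming' but not
# 'cyber_security', which is the gap this toy analysis reports.
industry = {"programming", "cyber_security"}
graduate = {"java_programming"}
missing = gap(industry, graduate)
```

An OWL-based version would let the reasoner derive the subsumption closure automatically, and open-world semantics would change how the absence of a competency is interpreted, which is exactly the limitation discussed above.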
Framework of experiential learning to enhance student engineering skill
Authors: Fadi Ghemri, Abdelaziz Bouras and Houssem Gasmi

In this research work, we propose a framework of experiential learning to enhance students' work skills and experience. This research aims to contribute to the development and expansion of local industry through long-term fundamental research that contributes to the science base and to understanding the needs of the national economy, by providing an adapted method for enhancing teaching content and pedagogical organization so that they respond more accurately to the competency requirements of local employers. A vocational approach is a complicated process for universities, since a multiplicity of variables must be taken into account to reach a compromise between company requirements and the competencies students acquire during university training. Academic experts (teachers, researchers) should design the curriculum to balance theory and practice in order to respond to workplace requirements, bridge the gap, and adequately prepare students for the market. Such complexity requires close and continuous collaboration between industry and academia to build innovative solutions and develop new skills and competencies. Higher education institutions need to reflect such an evolution in their training curricula, so that trained students are able to tackle real-world challenges efficiently. Collaboration between industry and academia at the undergraduate and graduate levels has been shown to increase the efficiency, effectiveness, and employability of hired graduates.
To produce competent graduates, the elaboration of a competence-oriented curriculum and its implementation and organization are crucial. This method is based on cooperative, oriented learning and requires an exchange between those responsible for the content and industry representatives, an exchange that must lead to a mutual understanding of needs. To implement this strategy in the Qatari context, we combine various research tools: collecting data from several ethnographic perspectives, local economic data, observations of surrounding universities and local companies, interviews with academic and professional experts, etc. We have also initiated meetings at the university with industrial and academic experts; indeed, we recently organized two workshops. During the first, representatives from companies (Ooredoo, QAPCO, Qatar Airways, IBM, etc.) and academic experts outlined the competency needs in the IT field, especially the cyber-security subfield. The experts attested that a qualified local workforce is crucial to respond to the large growth of the digital economy and to reduce dependency on expatriate experts; they also highlighted the importance of collaboration between university and industry through internships integrated into the master's curriculum. The second workshop focused on alumni; we considered their opinion highly important because they are in the right position to give feedback on the adequacy of their academic training for their current occupations and to identify challenges and gaps they faced when they joined their workplaces. During the session, the list of industry requirements produced by the first workshop was further discussed and refined with alumni and our industry partners. All of these actions aim to involve industry as stakeholders, engage them in our perspective, and build new cooperative learning curricula.
The establishment of an ecosystem in which different stakeholders from Qatari industry and academia contribute to and discuss the pedagogical evolution of local education, particularly in IT and computing, is important for evolving curricula that adequately prepare future graduates for real-world challenges. Elaborating the curriculum together with professionals from the business side helps keep both teaching methods and curriculum content up to date, and a modular structure with company involvement and permanent interaction with business facilitates this further. The results of the workshops and discussions are explained in more detail in the proposed poster. This work is part of the Qatar Foundation NPRP Pro-Skima research project.
UAV-based Flying IoT Gateway for Delay-Tolerant Data Transfer
Authors: Hamid Menouar, Ahmed Abuzrara and Nour Alsahan

Many statistics show the number of connected devices (the Internet of Things; IoT) worldwide growing drastically in the next few years, exceeding 30 billion by 2020. All these devices need to be connected to the internet to establish two-way communication with backend applications. The list of applications and services that IoT can enable is endless, covering areas such as smart cities, smart homes, connected vehicles, intelligent transport systems, agriculture, air and weather monitoring, Industry 4.0, etc. One of the fundamental requirements of an IoT device is to be connected and reachable at any time. Yet many of the applications that run on top of IoT do not require a continuous connection to the internet. For example, air-monitoring IoT devices normally sense and report data only once every 15 to 60 minutes, so they require a connection only at those intervals. For such devices and use cases, we propose a flying IoT gateway that comes to the sensor periodically, e.g., every 15 minutes, to take the data the sensor has collected and carry it to the backend. In this contribution we present a prototype of a solution that uses unmanned aerial vehicles (UAVs), also known as drones, to provide delay-tolerant data routing for IoT devices. In this solution, a drone flies over a set of deployed IoT devices, retrieves the collected and stored data, and then delivers it to the backend. Such a solution is suitable for sensing devices that do not require real-time communication, such as traffic speed cameras.
Indeed, speed cameras can collect data and store it locally until a drone comes to carry and transfer it to the backend. This solution not only reduces the overall cost by eliminating the cost of internet connectivity at each IoT device, but also reduces security vulnerability, as the devices are not physically connected to the internet all the time, nor directly. This work has been conducted under the R&D project NPRP9-257-1-056, which is funded and supported by QNRF.
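The store-and-forward pattern behind the flying gateway can be sketched as below: sensors buffer readings locally, and a visiting drone drains each buffer and ferries the data to the backend. The class and method names are illustrative, not part of the project's actual software.

```python
# Delay-tolerant store-and-forward: sensors buffer readings locally and a
# visiting drone (the flying gateway) ferries them to the backend.

class Sensor:
    def __init__(self, name):
        self.name = name
        self.buffer = []              # readings held until the next drone visit

    def sense(self, value):
        self.buffer.append((self.name, value))

    def drain(self):
        """Hand buffered readings to a visiting drone and clear the buffer."""
        data, self.buffer = self.buffer, []
        return data

class Drone:
    def __init__(self):
        self.payload = []             # data carried between sensors and backend

    def visit(self, sensor):
        self.payload.extend(sensor.drain())

    def deliver(self, backend):
        backend.extend(self.payload)
        self.payload = []

# One collection round over two hypothetical air-quality sensors.
backend = []
sensors = [Sensor("pm25-north"), Sensor("pm25-south")]
for s in sensors:
    s.sense(12.5)
    s.sense(13.1)
drone = Drone()
for s in sensors:
    drone.visit(s)
drone.deliver(backend)
```

The key property is that no sensor ever needs a live internet link: data waits in the buffer, and delivery latency is bounded by the drone's revisit interval.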
Measurement and Analysis of Bitcoin Transactions of Ransomware
Authors: Husam Basil Al Jawaheri, Mashael Al Sabah and Yazan Boshmaf

Recently, more than 100,000 ransomware attacks were reported in the Middle East, Turkey and Africa region [2]. Ransomware is a malware category that limits users' access to their files by encrypting them and requires victims to pay in order to obtain the decryption keys. To remain anonymous, ransomware operators require victims to pay through the Bitcoin network. However, due to an inherent weakness in Bitcoin's anonymity model, it is possible to link identities hidden behind Bitcoin addresses by analyzing the blockchain, Bitcoin's public ledger where the entire history of transactions is stored. In this work, we investigate the feasibility of linking users, as identities represented by Bitcoin public addresses, to addresses owned by entities operating ransomware. To demonstrate how such linking is possible, we crawled BitcoinTalk, a well-known forum for Bitcoin-related discussions, and a subset of public Twitter datasets. From nearly 5B tweets and 1M forum pages, we found 4.2K and 41K unique online identities, respectively, along with their public personal information and Bitcoin addresses. We then expanded these user datasets using closure analysis, where a Bitcoin address is used to identify a set of other addresses that are highly likely to be controlled by the same user. This allowed us to collect thousands more Bitcoin addresses for those users. By analyzing transactions in the blockchain, we were able to link 6 unique identities to different ransomware operators, including CryptoWall [1] and WannaCry [3]. Moreover, to gain insight into the economy and activity of these ransomware addresses, we analyzed their money flow along with the timestamps of the transactions involving them. We observed that ransomware addresses were active from 2014 to 2017, with an average lifetime of nearly 62 days.
While some addresses were only active during a certain year, others operated for more than 3 years. We also observed that the revenue of this malware exceeds USD 6M for CryptoWall, and ranges from USD 3.8K to USD 700K for ransomware such as WannaCry and CryptoLocker, with an average of nearly 52 transactions per address. One address associated with the CryptoLocker ransomware also held a large amount of Bitcoin, worth more than USD 34M at the time of writing. Finally, we believe this type of analysis can potentially be used as a forensic tool to investigate ransomware attacks and may help authorities trace the roots of such malware.
1. "Ransom.Cryptowall." Symantec. June 14, 2014. Accessed November 01, 2017. https://www.symantec.com/security_response/writeup.jsp?docid=2014-061923-2824-99.
2. Varghese, Joseph. "Ransomware could be deadly, cyber security expert warns." Gulf Times. May 05, 2017. Accessed November 01, 2017. http://www.gulf-times.com/story/546937/Ransomware-could-be-deadly-cyber-security-expert-w.
3. Woollaston, Victoria. "WannaCry ransomware: what is it and how to protect yourself." WIRED. June 28, 2017. Accessed November 01, 2017. http://www.wired.co.uk/article/wannacry-ransomware-virus-patch.
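Closure analysis of the kind described above commonly builds on the multi-input heuristic: addresses spent together as inputs of one transaction are assumed to share an owner, so clustering reduces to union-find over co-spent addresses. The sketch below uses made-up transactions, not the paper's dataset.

```python
# Closure analysis via the multi-input heuristic: all input addresses of a
# transaction are assumed to be controlled by the same user, so address
# clustering reduces to union-find over co-spent addresses.

def find(parent, a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path compression
        a = parent[a]
    return a

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    parent[ra] = rb

def cluster(transactions):
    """transactions: list of input-address lists, one list per transaction."""
    parent = {}
    for inputs in transactions:
        for addr in inputs:
            parent.setdefault(addr, addr)
        for addr in inputs[1:]:
            union(parent, inputs[0], addr)
    # group addresses by their root representative
    clusters = {}
    for addr in parent:
        clusters.setdefault(find(parent, addr), set()).add(addr)
    return list(clusters.values())

# Hypothetical transactions: addrA and addrB are co-spent, then addrB and
# addrC, so A, B, C collapse into one cluster; addrD stays separate.
txs = [["addrA", "addrB"], ["addrB", "addrC"], ["addrD"]]
groups = cluster(txs)
```

Once a known identity (e.g., one scraped from BitcoinTalk) falls inside a cluster, every address in that cluster, including any ransomware payment address, becomes linkable to it.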
Artificial Intelligence and Social Media to Aid Disaster Response and Management
Authors: Muhammad Imran, Firoj Alam, Ferda Ofli and Michael Aupetit

Extended Abstract
People increasingly use social media such as Facebook and Twitter during disasters and emergencies. Research studies have demonstrated the usefulness of social media information for a number of humanitarian relief operations, ranging from situational awareness to actionable information extraction. Moreover, the use of social media platforms during sudden-onset disasters can help bridge the information-scarcity issue, especially in the early hours when few other information sources are available. In this work, we analyzed Twitter content (textual messages and images) posted during the recent devastating hurricanes Harvey and Maria. We employed state-of-the-art artificial intelligence techniques to process millions of textual messages and images shared on Twitter, to understand the types of information available on social media and how emergency response organizations can leverage this information to aid their relief operations. Furthermore, we employed deep neural network techniques to analyze the imagery content and assess the severity of the damage shown in the images; damage severity assessment is one of the core tasks for many humanitarian organizations. To perform data collection and analysis, we employed our Artificial Intelligence for Digital Response (AIDR) technology. AIDR combines human computation and machine learning to train models specialized to fulfill the specific information needs of humanitarian organizations. Organizations such as UN OCHA and UNICEF have used AIDR during major disasters in the past, including the 2015 Nepal earthquake and the 2014 typhoons Hagupit and Ruby, among others.
Next, we provide a brief overview of our analysis of the two aforementioned hurricanes.
Hurricane Harvey Case Study
Hurricane Harvey was an extremely devastating storm that made landfall near Port Aransas and Port O'Connor, Texas, in the United States on August 24-25, 2017. We collected and analyzed around 4 million Twitter messages to determine how many of them report, for example, some kind of infrastructure damage, injured or dead people, missing or found people, displacement and evacuation, or donations and volunteering. Furthermore, we analyzed geotagged tweets to determine which types of information originate from the disaster-hit areas compared to neighboring areas; for instance, we generated maps of different US cities in and around the hurricane-hit areas. Figure 1 shows the map of geotagged tweets reporting different types of useful information from Florida, USA. According to the results obtained from the AIDR classifiers, the caution and advice and sympathy and support categories are more prominent than other informational categories such as donation and volunteering. In addition to processing the textual content of the collected tweets, we performed automatic image processing, based on state-of-the-art deep learning techniques, to collect and analyze the imagery content posted on Twitter during Hurricane Harvey. One of the classifiers deployed in this case performed damage-level assessment, which aims to predict the level of damage in an image as one of three levels: SEVERE damage, MILD damage, and NO damage. Our analysis revealed that most of the images (∼86%) show no damage signs or are irrelevant, containing advertisements, cartoons, banners, and other unrelated content. Of the remaining set, 10% of the images show MILD damage, and only ∼4% show SEVERE damage.
However, finding those 10% (MILD) or 4% (SEVERE) useful images is like finding a needle in a giant haystack. Artificial intelligence techniques such as those employed by the AIDR platform are hugely useful for overcoming such information overload and helping decision-makers process large amounts of data in a timely manner.
Fig. 1: Geotagged tweets from Florida, USA.
Hurricane Maria Case Study
An even more devastating hurricane than Harvey was Hurricane Maria, which hit Puerto Rico and nearby areas. Damaged roofs, uprooted trees, and widespread flooding were among the scenes on the path of Hurricane Maria, a Category 5 hurricane that slammed Dominica and Puerto Rico and caused at least 78 deaths, including 30 in Dominica and 34 in Puerto Rico, leaving many more without homes, electricity, food, and drinking water. We activated AIDR on September 20, 2017 to collect tweets related to Hurricane Maria; more than 2 million tweets were collected. Figure 2 shows the distribution of daily tweet counts. To understand what these tweets are about, we applied our tweet text classifier, which was originally trained (F1 = 0.64) on more than 30k human-labeled tweets from a number of past disasters. AIDR's image processing pipeline was also activated to identify images showing infrastructure damage due to Hurricane Maria. Around 80k tweets contained images; however, ∼75% of these images were duplicates. The remaining 25% (∼20k) were automatically classified by AIDR's damage assessment classifier into the three classes described above.
Figure 2: Tweet counts per day.
We believe that more information about the devastation caused by a disaster can be extracted from images than from the textual content provided by users alone. Even though it is still in the testing phase, our image processing pipeline does a decent job of identifying images that show MILD or SEVERE damage.
Instead of trying to look at all the images, humanitarian organizations and emergency responders can simply review the retained set of MILD or SEVERE damage images to get a quick sense of the level of destruction caused by the disaster.
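The image triage described above (drop duplicates, classify damage level, retain only damage-bearing images) can be sketched as follows. The exact-hash deduplication and the stub classifier are deliberate simplifications: AIDR's real pipeline uses trained deep models, and catching the near-duplicates mentioned above would need perceptual rather than exact hashing.

```python
import hashlib

# Toy image triage: deduplicate, classify damage level, keep MILD/SEVERE.
# classify_damage is a hypothetical stand-in for a trained deep classifier.

def classify_damage(image_bytes):
    """Stub classifier: pretend images tagged 'severe'/'mild' in their bytes
    were scored by a neural network; everything else counts as NO damage."""
    if b"severe" in image_bytes:
        return "SEVERE"
    if b"mild" in image_bytes:
        return "MILD"
    return "NONE"

def triage(images):
    seen = set()
    retained = []
    for img in images:
        digest = hashlib.sha256(img).hexdigest()   # exact-duplicate filter
        if digest in seen:
            continue
        seen.add(digest)
        level = classify_damage(img)
        if level in ("SEVERE", "MILD"):
            retained.append((level, img))
    return retained

# A tiny simulated stream: one duplicate and two irrelevant images.
stream = [b"ad-banner", b"severe-roof-damage", b"severe-roof-damage",
          b"mild-flooding", b"cartoon"]
kept = triage(stream)
```

Filtering before classification matters at scale: with ∼75% duplicates and ∼86% irrelevant content, the retained set is a small fraction of the raw stream, which is what makes human review feasible.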
Fault Tolerant Control of Multiple Mobile Robots
Authors: Nader Meskin and Parisa Yazdjerdi

The use of autonomous wheeled mobile robots (WMRs) has recently increased significantly in industries such as manufacturing, health care, and the military, and there are stringent requirements for their safe and reliable operation in industrial/commercial environments. In addition, autonomous multi-agent mobile robot systems, in which a number of robots cooperate to accomplish a task, are increasingly in demand across industries. Consequently, the development of fault tolerant control (FTC) for WMRs is a vital research problem for enhancing the safety and reliability of mobile robots. The main aim of this paper is to develop an actuator fault tolerant controller for both single- and multiple-robot applications, with the main focus on differential drive mobile robots. Initially, a fault tolerant controller is developed for loss-of-effectiveness actuator faults in differential drive mobile robots tracking a desired trajectory. The heading and position of a differential drive mobile robot are controlled through the angular velocities of the left and right wheels. The actuator loss-of-effectiveness fault is modeled in the kinematic equation of the robot as a multiplicative gain on the left and right wheel angular velocities. Accordingly, the aim is to estimate these gains using a joint parameter and state estimation framework. Toward this goal, the augmented discrete-time nonlinear model of the robot is considered, and a joint parameter and state estimation method based on the extended Kalman filter (EKF) is used to estimate the actuator loss-of-effectiveness gains, as parameters of the system, together with its states. The estimated gains are then used in the controller to compensate for the effect of actuator faults on the performance of the robot.
In addition, the proposed FTC method is extended to leader-follower formation control of mobile robots in the presence of a fault in either the leader or the followers. The multi-agent mobile robot system is designed to track a trajectory while keeping a desired formation in the presence of actuator loss-of-effectiveness faults. It is assumed that the leader's controller is independent of the followers and is designed based on the FTC framework developed above. Again, the fault is modeled in the kinematic equation of the robot as a multiplicative gain, and the augmented discrete-time nonlinear model is used to estimate the loss-of-effectiveness gains. The follower controller is designed using a feedback linearization approach with respect to the coordinates of the leader robot. An extended Kalman filter is used for each robot to estimate the parameters and states of the system, and when a fault is detected in any of the followers, the corresponding controller compensates for it. Finally, the efficacy of the proposed FTC framework for both single and multiple mobile robots is demonstrated experimentally using Qbot-2 robots from Quanser. To sum up, a fault tolerant control scheme is proposed for differential drive mobile robots in the presence of loss-of-effectiveness actuator faults. A joint parameter and state estimation scheme based on the EKF is used to estimate the parameters (actuator loss of effectiveness) and the system states, and the effect of the estimated fault is compensated in the controller for both a single robot and formation control of multiple robots. The proposed schemes are experimentally validated on Qbot-2 robots.
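The joint parameter and state estimation idea can be sketched as below: the two effectiveness gains are appended to the pose as extra states of a discrete-time unicycle model and estimated with a standard EKF. All numbers (wheel radius, axle length, covariances, true gains) are illustrative, not the paper's experimental values, and the plant is simulated noise-free for clarity.

```python
import numpy as np

# EKF joint state/parameter estimation for a differential drive robot.
# Augmented state: [x, y, theta, aL, aR], where aL, aR are actuator
# loss-of-effectiveness gains (1.0 = healthy) treated as constant states.

r, L, dt = 0.05, 0.30, 0.05            # wheel radius, axle length, time step

def f(x, wL, wR):
    """Discrete kinematics with faulty effective wheel speeds aL*wL, aR*wR."""
    px, py, th, aL, aR = x
    v = r * (aL * wL + aR * wR) / 2.0
    om = r * (aR * wR - aL * wL) / L
    return np.array([px + v * dt * np.cos(th),
                     py + v * dt * np.sin(th),
                     th + om * dt, aL, aR])

def jacobian(x, wL, wR):
    px, py, th, aL, aR = x
    v = r * (aL * wL + aR * wR) / 2.0
    F = np.eye(5)
    F[0, 2] = -v * dt * np.sin(th)
    F[0, 3] = (r / 2) * wL * dt * np.cos(th)
    F[0, 4] = (r / 2) * wR * dt * np.cos(th)
    F[1, 2] = v * dt * np.cos(th)
    F[1, 3] = (r / 2) * wL * dt * np.sin(th)
    F[1, 4] = (r / 2) * wR * dt * np.sin(th)
    F[2, 3] = -(r / L) * wL * dt
    F[2, 4] = (r / L) * wR * dt
    return F

H = np.hstack([np.eye(3), np.zeros((3, 2))])   # pose [x, y, theta] is measured
Q = np.diag([1e-8, 1e-8, 1e-8, 1e-9, 1e-9])    # small process noise
R = np.eye(3) * 1e-6                           # measurement noise covariance

x_true = np.array([0.0, 0.0, 0.0, 0.8, 0.9])   # true (faulty) gains: 0.8, 0.9
x_est = np.array([0.0, 0.0, 0.0, 1.0, 1.0])    # start assuming healthy actuators
P = np.diag([1e-4, 1e-4, 1e-4, 0.25, 0.25])

for k in range(400):
    wL = 5.0 + 2.0 * np.sin(0.1 * k)           # persistently exciting commands
    wR = 5.0 + 2.0 * np.cos(0.1 * k)
    x_true = f(x_true, wL, wR)                 # simulate the plant
    # EKF predict
    F = jacobian(x_est, wL, wR)
    x_est = f(x_est, wL, wR)
    P = F @ P @ F.T + Q
    # EKF update with the pose measurement
    z = H @ x_true
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(5) - K @ H) @ P

aL_hat, aR_hat = x_est[3], x_est[4]
```

In the FTC loop, the estimated gains would be fed back to rescale the commanded wheel velocities, compensating for the loss of effectiveness.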
Wearable V2X Solution for Children and Vulnerable Road Users' Safety on the Road
Authors: Hamid Menouar, Nour Alsahan and Mouhamed Ben Brahim

According to Energy Technology Perspectives 2014, there are approximately 900 million light-duty vehicles (not counting two- and three-wheelers) today, and that number is expected to double by 2050 to reach 2 billion. Such a considerable increase will bring further challenges for road safety and traffic efficiency. Motor vehicle crashes are the leading cause of death for children and young adults in the United States, with an annual death toll of 33,000 and over 2.3 million people injured. Those figures are representative of the challenge not only in the US but also in other regions, including Qatar and the Gulf region. Vehicle-to-Vehicle and Vehicle-to-Infrastructure (V2X) communication technology, which will reach our roads within a few years, is seen as a great way to reduce the number of accidents on the road, and it is considered an enabler of the next generation of road safety and Intelligent Transport Systems (ITS). V2X communication is not limited to vehicle-to-vehicle and vehicle-to-infrastructure; it is also meant to be used for vehicle-to-bicycle and even vehicle-to-pedestrian communication. Indeed, by enabling real-time, fast communication between vehicles and vulnerable road users such as bicyclists and pedestrians, we can make the road much safer for those users. This is one of the use cases we would like to enable and test in the Qatar V2X Field Operational Test (Qatar V2X FOT), which is supported and funded by the Qatar National Research Fund (QNRF). Equipping vulnerable road users such as bicyclists and pedestrians with V2X capabilities poses many challenges. The main one is energy consumption: V2X operates in the 5.9 GHz radio band, which consumes relatively high energy. Therefore, to operate V2X, a vulnerable user, especially a pedestrian, needs to carry a battery that must be recharged regularly.
We developed a solution to this problem that reduces the energy consumption of the V2X device to a level at which it can operate on a small battery suitable for a wearable device. This poster will expose the challenges of using V2X for vulnerable road users, especially pedestrians, and present the solution and the related prototype developed and tested within the NPRP project. The solution and prototype presented in this poster are outcomes of research project NPRP 8-2459-1-482, which is supported and funded by QNRF.
Evaluation of Hardware Accelerators for Lattice Boltzmann based Aneurysm Blood Flow Measurement
Clipping is a potential treatment for patients with ruptured or unruptured brain aneurysms. To determine the most suitable treatment and the clip's location, surgeons need measurements such as the velocity and blood pressure in and around the aneurysm. Typically, simulating the blood flow and obtaining the corresponding measurements requires heavy computational resources. The Lattice Boltzmann (LB) method is a conventional way to simulate fluid dynamics, and HemeLB is an open-source computational suite for 3D fluid dynamics simulations of blood flow in the vasculature of the human body. In this work, we aim to evaluate the hardware acceleration of LB and HemeLB on a reconfigurable system on chip (SoC) and on a high performance computing (HPC) machine using the RAAD platform.
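To give a feel for the kernel being accelerated, here is a minimal single-relaxation-time (BGK) D2Q9 Lattice Boltzmann step on a small periodic grid. It is a textbook sketch for intuition only; HemeLB's actual kernels, boundary handling, and vascular geometries are far more involved.

```python
import numpy as np

# Minimal D2Q9 Lattice Boltzmann (BGK) update on a periodic grid.
# f[i] is the particle distribution along lattice direction c[i].
w = np.array([4/9] + [1/9]*4 + [1/36]*4)            # lattice weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])  # lattice velocities
tau = 0.6                                           # relaxation time
nx = ny = 32

def step(f):
    rho = f.sum(axis=0)                             # macroscopic density
    u = np.einsum('iab,ic->cab', f, c) / rho        # macroscopic velocity
    cu = np.einsum('ic,cab->iab', c, u)
    usq = (u ** 2).sum(axis=0)
    feq = w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    f = f - (f - feq) / tau                         # BGK collision
    for i in range(9):                              # streaming (periodic)
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f

# Start from rest with a small density bump and advance a few steps.
f = w[:, None, None] * np.ones((9, nx, ny))
f[:, nx // 2, ny // 2] *= 1.1                       # small perturbation
mass0 = f.sum()
for _ in range(10):
    f = step(f)
```

The collide-and-stream structure, with purely local collisions and nearest-neighbor streaming, is exactly what makes the method attractive for FPGA/SoC and HPC acceleration.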
Effective Realtime Tweet Summarization
Authors: Reem Suwaileh and Tamer Elsayed

Twitter has developed into an immense information creation and sharing network through which users post information, varying from the world's breaking news to other topics such as sports, science, religion, and even personal daily updates. Although a user may regularly check her Twitter timeline to stay up to date on her topics of interest, manually tracking those topics is impossible given the challenges that emerge from the nature of the Twitter timeline. Among these challenges are the big volume of posted tweets (about 500M tweets are posted daily), noise (e.g., spam), redundant information (e.g., tweets with similar content), and the rapid development of topics over time. This necessitates real-time summarization (RTS) systems that automatically track a set of predefined interest profiles (representing the user's topics of interest) and summarize the stream while considering the relevance, novelty, and freshness of the selected tweets. For instance, if a user is interested in following updates on the "GCC crisis", the system should efficiently monitor the stream and capture the on-topic tweets covering all aspects of the topic (e.g., official statements, interviews, and new claims against Qatar), which change over time. Accordingly, real-time summarization should use simple and efficient approaches that can scale to follow multiple interest profiles simultaneously. In this work, we tackle this problem by proposing an RTS system that adopts a lightweight and conservative filtering strategy. Given a set of user interest profiles, the system tracks those profiles over Twitter's continuous live stream in a scalable manner, in a pipeline of phases: pre-qualification, preprocessing, indexing, relevance filtering, novelty filtering, and tweet nomination.
In the pre-qualification phase, the system filters out non-English and low-quality tweets (i.e., tweets that are too short or include too many hashtags). Once a tweet is qualified, the system preprocesses it in a series of steps (e.g., removing special characters) that aim at preparing the tweet for the relevance and novelty filters. The system adopts a vector space model where both interest profiles and incoming tweets are represented as vectors constructed using idf-based term weighting. An incoming tweet is scored for relevance against the interest profiles using the standard cosine similarity. If the relevance score of a tweet exceeds a predefined threshold, the system adds the tweet to the potentially-relevant tweets for the corresponding profile. The system then measures the novelty of the potentially-relevant tweet by computing its lexical overlap with the already-pushed tweets using a modified version of Jaccard similarity. A tweet is considered novel if the overlap does not exceed a predefined threshold. This way, the system does not overwhelm the user with redundant notifications. Finally, the list of potentially-relevant and novel tweets of each profile is re-ranked periodically based on both relevance and freshness, and the top tweet is then pushed to the user; this ensures that the user gets fresh updates without being overwhelmed by excessive notifications. The system also allows the expansion of the profiles over time (by automatically adding potentially-relevant terms) and the dynamic change of the thresholds to adapt to topic drift over time. We conducted extensive experiments over multiple standard test collections that were specifically developed to evaluate RTS systems. Our live experiments, tracking more than 50 topics over a large stream of tweets for 10 days, demonstrate both the effectiveness and the scalability of our system.
Indeed, our system exhibited the best performance among 19 international research teams from all over the world in a research track organized last year by the US National Institute of Standards and Technology (NIST).
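The relevance and novelty filters described above can be sketched as follows. This is a minimal illustration with made-up thresholds and plain term-frequency tweet vectors; the actual system uses idf-based term weighting and a modified Jaccard measure:

```python
import math
from collections import Counter

def cosine(vec_a, vec_b):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(w * vec_b.get(t, 0.0) for t, w in vec_a.items())
    na = math.sqrt(sum(w * w for w in vec_a.values()))
    nb = math.sqrt(sum(w * w for w in vec_b.values()))
    return dot / (na * nb) if na and nb else 0.0

def jaccard(terms_a, terms_b):
    """Lexical overlap between two term sets."""
    a, b = set(terms_a), set(terms_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def filter_tweet(tweet_terms, profile_vec, pushed, rel_th=0.3, nov_th=0.6):
    """Keep a tweet only if it is relevant to the profile AND novel
    with respect to every already-pushed tweet of that profile."""
    tweet_vec = Counter(tweet_terms)  # plain tf weights for illustration
    if cosine(tweet_vec, profile_vec) < rel_th:
        return False
    return all(jaccard(tweet_terms, p) < nov_th for p in pushed)
```

A redundant on-topic tweet fails the second check even though it passes the first, which is exactly how the pipeline avoids pushing duplicate notifications.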
-
-
-
A Reconfigurable Multipurpose SoC Mobile Platform for Metal Detection
Authors: Omran Al Rshid Abazeed, Naram Mhaisen, Youssef Al-Hariri, Naveed Nawaz and Abbes Amira
Background and Objectives: One of the key problems in mobile robotics is the ability to understand and analyze the surrounding environment in a useful way. This is especially important in dangerous applications where human involvement should be avoided. A clear example of employing robots in dangerous applications is mine detection, which is mostly done through metal detection techniques. Among the various types of walking robots, hexapod walking robots offer a good static stability margin and faster movement, especially in rough-terrain applications [1]. Thus, the “Hexapod Terasic Spider Robot” is a suitable platform for metal detection, especially as it is equipped with an Altera DE0-Nano field programmable gate array (FPGA) SoC, which allows for extremely high performance and accuracy. This work introduces a novel implementation of a metal detection module on the Terasic Spider Robot; the metal detection module is designed and interfaced with the robot in order to perform the metal detection. The user can control the robot and receive feedback through a Bluetooth-enabled Android phone. In addition, a general-purpose design flow that can be used to implement other applications on this platform is proposed, which demonstrates the versatility of the platform. Method: The designed metal detection module (MDM) is mainly based on an oscillator and a coil; its operating principle is that when the coil approaches a metal, the frequency of the oscillator changes [2]. This frequency change can be accurately monitored in real time using the FPGA SoC board. Thus, the module can be used for detecting metals. The metal detection module is interfaced with the DE0-Nano SoC board, where the detection algorithm is implemented. The development of the algorithm is carried out on the board available on this robot.
The board includes an FPGA, which provides a high-performance, real-time implementation of parts of the algorithm, and a hard processor system (HPS) running Linux OS, which can be used to easily interface the board with other computer systems and peripherals such as mobile phones and cameras [3]. As shown in Fig. 1, the detection algorithm is based on hardware/software co-design; the output of the MDM is provided to the FPGA part of the board in order to achieve accurate, real-time monitoring. Upon detection, the FPGA sends a detection signal through the shared memory interface to the HPS part of the board. The HPS is then responsible for sending a warning to the mobile phone through a multi-threaded communication application running on the HPS. Figure 1: General architecture of the metal detection system. In order to implement the metal detection algorithm on the Terasic Spider Robot, it was necessary to formulate and follow the design flow provided in Fig. 2. This design flow can be used to implement other applications that can utilize the hardware/software co-design approach for better performance. Figure 2: General-purpose design flow for the Altera Terasic Spider Robot platform. Results and Discussion: Due to the coil specification and the circuit design, the frequency captured in normal conditions (no metal present) is 2155 ± 20 Hz. The frequency increases as the distance between the metal and the coil decreases. When a metal at least the size of the coil is present at a distance of 7 cm from the detection coil, the frequency exceeds 2200 Hz regardless of the medium: the tested medium was wood, and similar results were obtained in air. These numbers are specific to the proposed system; changing the circuit parameters can increase the detection distance if desired.
For example, more coil turns, a bigger diameter, and faster oscillation will increase the detection distance. To avoid any interference between the robot body and the metal detection circuit readings, a 15-inch plastic arm is used to connect the metal detection module to the body of the robot. The electronic components are attached to this arm at the nearest possible point to the coil. The metal detection module attached to the plastic arm and the spider robot are shown in Figs. 3 and 4, respectively. Figure 3: The metal detection circuit combined with the arm. Figure 4: MDM connected to the Terasic Spider Robot. The robot is then controlled through a mobile application, which is modified so that the robot can send feedback (a detection warning) to the mobile phone. Figure 5 shows an example of the notification message “Metal Detected” shown whenever a metal is detected. Figure 5: Metal detection message in the mobile application interface. Summary and Conclusion: This abstract gives a general description of a research project that aims to utilize the Terasic Spider Robot platform to perform accurate, real-time metal detection. This is an important application that helps humans avoid involvement in dangerous operations like mine detection. In addition, a general-purpose design flow is proposed for the benefit of the research community and anyone who intends to implement an application on this platform in the future. Acknowledgment: This project was funded by the Qatar University Internal Grants program. References: [1] Y. Zhu, B. Jin, Y. Wu, T. Guo and X. Zhao, “Trajectory Correction and Locomotion Analysis of a Hexapod Walking Robot with Semi-Round Rigid Feet”, Sensors, vol. 16, no. 9, p. 1392, 2016. [2] T. Alauddin, M. T. Islam and H. U.
Zaman, “Efficient design of a metal detector equipped remote-controlled robotic vehicle,” 2016 International Conference on Microelectronics, Computing and Communications (MicroCom), Durgapur, 2016, pp. 1-5. [3] “Cyclone V Device Overview”, Altera, 2016. [Online]. Available: https://www.altera.com/en_US/pdfs/literature/hb/cyclone-v/cv_51001.pdf. [Accessed: 16-Oct-2017].
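The reported detection logic (a no-metal baseline of 2155 ± 20 Hz and an alarm once the oscillator frequency passes 2200 Hz) can be sketched in software. The debouncing over consecutive samples is an illustrative addition, not something the abstract specifies:

```python
def metal_detected(freq_hz, alarm_hz=2200.0):
    """Single-sample check: the no-metal band is 2155 +/- 20 Hz, and a rise
    past ~2200 Hz indicates metal within about 7 cm of the coil."""
    return freq_hz > alarm_hz

def monitor(freq_samples, alarm_hz=2200.0, min_consecutive=3):
    """Debounced detection: raise the alarm only after several consecutive
    above-threshold samples, to ignore brief frequency glitches."""
    run = 0
    for f in freq_samples:
        run = run + 1 if metal_detected(f, alarm_hz) else 0
        if run >= min_consecutive:
            return True
    return False
```

On the actual platform this comparison runs in the FPGA fabric for real-time response, with the HPS only forwarding the resulting warning to the phone.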
-
-
-
Multi-Objective Search-Based Requirements Traceability Recovery for Adaptive Systems
Authors: Mohamed Salah Hamdi, Adnane Ghannem and Hany Ammar
Complex adaptive systems exhibit emergent behavior. This type of behavior occurs in volatile environments involving cyber-physical systems, such as those aimed at smart-city operations. Maintenance of adaptive systems aims at improving their performance by dealing with continuous and frequently changing requirements. Adaptive systems' behavior therefore requires up-to-date requirements traceability. To this end, we need to understand the requirements and to localize the program parts that should be modified according to the description of the new requirements. This process is known as Requirements Traceability Recovery (RTR). The process of generating requirements traceability, when done by a human (e.g., a system maintainer), is time-consuming and error-prone. Currently, most approaches in the literature are time-consuming and semi-automatic, always needing the intervention of the user. In our work, we are specifically interested in recovering the links between requirements and the code of the software system, with the aim of helping the designer to update the system appropriately by automating the traceability process. To do this, we formulated the RTR problem as a complex combinatorial problem that can be tackled using Heuristic Search (HS) techniques. These techniques can intelligently explore a big search space (the space of possible solutions) for a problem and find an acceptable approximate solution. A variety of HS techniques exist in the literature. In our work, we use the Non-dominated Sorting Genetic Algorithm (NSGA-II), which is an improved version of the classic Genetic Algorithm (GA). NSGA-II is a multi-objective technique that aims at finding the best compromise between objectives (the Pareto front). The application of NSGA-II to a specific problem (i.e., the requirements traceability recovery problem in our context) requires the accomplishment of the following five elements: 1.
Representation of the individuals (vector, tree, etc.), 2. Definition of the evaluation function (fitness function), 3. Selection of the (best) individuals to transmit from one generation to another, 4. Creation of new individuals using genetic operators (crossover and mutation) to explore the search space, 5. Generation of a new population using the selected individuals and the newly created individuals. The proposed approach takes as input a software system, a set of requirements, and the maintenance history of the software system, and produces as output the trace links, i.e., the artifacts (classes, methods, etc.) related to each requirement. Three objectives are defined to support the NSGA-II: (1) semantic similarity, (2) Recency of Change (RC), and (3) Frequency of Change (FC). We used the cosine of the angle between the vector that represents the requirement and the vector that represents the software element to measure the semantic similarity. To calculate the RC and FC measures, we used the information extracted from the history of change accumulated during the maintenance process of the software system. The intuition behind the RC measure is that artifacts (classes, methods, etc.) that changed more recently than others are more likely to change now, i.e., are related to the new requirements at hand; the intuition behind the FC measure is that artifacts that change more frequently than others are likewise more likely to change now. Each solution consists of assigning each requirement to one or many artifacts (classes, methods) of the system. The solution should maximize the three objectives mentioned above as much as possible.
Experiments were conducted on three open-source systems in order to evaluate the approach. The obtained results confirm the effectiveness of the approach in correctly generating the traces between the requirements and the classes in the source code, with an average precision of 91% and an average recall of 89%. We also compared our results to those obtained by two recent works and found that our approach outperforms both, with higher average precision and recall on all three projects.
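The three objectives and the Pareto-dominance relation that NSGA-II sorts by can be sketched as follows; the recency and frequency normalizations here are assumptions for illustration, since the abstract does not give exact formulas:

```python
import math

def objectives(req_vec, artifact_vec, days_since_change, change_count, max_changes):
    """Score one (requirement, artifact) pair on the three objectives,
    all to be maximized by NSGA-II."""
    dot = sum(w * artifact_vec.get(t, 0.0) for t, w in req_vec.items())
    na = math.sqrt(sum(w * w for w in req_vec.values()))
    nb = math.sqrt(sum(w * w for w in artifact_vec.values()))
    semantic = dot / (na * nb) if na and nb else 0.0  # cosine similarity
    recency = 1.0 / (1.0 + days_since_change)         # assumed decay form
    frequency = change_count / max_changes if max_changes else 0.0
    return (semantic, recency, frequency)

def dominates(a, b):
    """Pareto dominance: a is at least as good on every objective and
    strictly better on at least one (the relation NSGA-II sorts by)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
```

NSGA-II repeatedly partitions the population into non-dominated fronts using this relation, so no single weighted sum of the three objectives has to be chosen in advance.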
-
-
-
SLiFi: Exploiting Visible Light Communication (VLC) to Authenticate WiFi Access Points
Authors: Hafsa Amin, Faryal Asadulla, Aisha Jaffar, Gabriele Oligeri and Mashael Al-Sabah
This work presents an effective and efficient solution (SLiFi) to the evil twin attack in wireless networks. The evil twin is a rogue WiFi Access Point (AP) that pretends to be an authentic one by using the same network configuration, including (i) the Service Set Identifier (SSID), (ii) the communication channel, and (iii) the MAC address of the impersonated AP. The evil twin is a trap set up by an adversary willing to eavesdrop on the user's Internet traffic. The attack is relatively easy to implement, hard to detect, and it can have a severe impact on a user's privacy. Many researchers have focused on this attack and provided defences from different perspectives: network, access point, and client side. Unfortunately, all the solutions provided so far are still not ready for mass deployment since they involve significant modifications to the 802.11 WiFi protocol. In the following, we report some of the most important ones. Gonzales et al. [1] proposed to construct a context vector containing the order of all APs detected at a particular time, with their SSID and RSSI values. This enables the client to compare its future associations with the stored context vector. Bauer et al. [2] proposed SWAT, a request-response protocol. This approach provides one-way AP authentication and allows the client to establish a connection to the network through a shared secret key, creating a secure session based on the principle of trust-on-first-use (TOFU). Lanze et al. [3] introduced a new technique using the aircrack-ng suite. The tool airbase-ng is set up on all the devices and the beacon frames are collected from various APs. The proposed approach compares the Timing Synchronization Function (TSF) timestamps and their corresponding receiving times in order to spot anomalies due to message proxying and, therefore, the presence of a malicious AP. Finally, Gangasagare et al.
[4] proposed a fingerprinting technique based on network traffic that detects whether the AP relays the traffic through another wireless connection. SLiFi does not require any changes to existing communication protocols, and it enables access point authentication (by the users) in a fast and reliable way. Indeed, SLiFi enables the user to authenticate the legitimate AP by exploiting a Visible Light Communication (VLC) channel. SLiFi involves two parties: the (honest) AP, provided with a WiFi interface and able to transmit data through a VLC channel, and an end-user, provided with software that can read data from a VLC channel, e.g., by using a webcam. SLiFi proceeds in four phases. AP's Public Key (PubKey) broadcast: The AP transmits its own PubKey to the end-user via an authenticated channel (VLC). The PubKey broadcast process is completely transparent to the user, since each bit of the PubKey is delivered by quickly switching on and off the light of the room in which the user is. This is achieved by standard VLC techniques: the human eye cannot perceive the fast blinking light, but other devices, such as special webcams, can detect the brightness change. Subsequently, the brightness changes can be translated into a sequence of bit values. Seed generation: The end-user retrieves the public key from the VLC channel by using a webcam and transmits back to the AP a randomly generated seed encrypted with the AP's public key. The PubKey is securely delivered to the user since any non-authorized light source can be easily spotted. Therefore, only one authorized VLC transmitter will be in place, and it will deliver the PubKey of the AP. The client can now use the trusted PubKey to send back to the AP an encrypted seed to be used for the key generation. Secret key generation:
The AP receives the user's encrypted seed via the WiFi channel, decrypts the seed using its private key, and sends an acknowledgment message encrypted with the seed back to the end-user. This phase performs the key agreement, and both the AP and the user's device converge to a shared secret key. Encrypted communication: Any further communication between the end-user and the AP is encrypted with the shared secret key, i.e., the seed generated by the client. SLiFi supports multiple clients, since the AP can easily deal with concurrent communications. Moreover, from a practical perspective, SLiFi can be adopted only to generate the shared secret key and pass it to an existing encryption protocol, e.g., WPA2 or WPA2-Enterprise. To evaluate SLiFi, we built a proof-of-concept using (1) a Raspberry Pi, which emulates the AP, (2) a set of LEDs to transmit the PubKey, and (3) standard laptops with webcams acting as clients. All the software components have been implemented and tested. We performed several tests to evaluate the feasibility of our solution. To test the reliability of the VLC transmission, we ran various experiments measuring public key transmission errors as a function of the VLC bit-rate, and we observed that the PubKey can be reliably transmitted within a reasonable time frame. Finally, our results prove the feasibility of the solution in terms of the time to establish the key and robustness to the evil twin attack. References 1. H. Gonzales, K. Bauer, J. Lindqvist, D. McCoy, and D. Sicker. Practical Defenses for Evil Twin Attacks in 802.11. In IEEE Globecom Communications and Information Security Symposium (Globecom 2010), Miami, FL, December 2010. 2. K. Bauer, H. Gonzales, and D. McCoy. Mitigating Evil Twin Attacks in 802.11. January 2009. 3. F. Lanze, A. Panchenko, T. Engel, and I. Alcaide. Undesired Relatives: Protection Mechanisms against the Evil Twin Attack in IEEE 802.11. 4. M. Gangasagare.
Active User-Side Evil Twin Access Point Detection. International Journal of Scientific & Engineering Research, May 2014.
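The four phases can be sketched end-to-end. The textbook-RSA key below (tiny primes) is a stand-in for the AP's real key pair and is of course not secure; it only illustrates how the VLC-delivered public key protects the seed exchange:

```python
import secrets

# Toy textbook-RSA key for the AP -- illustrative only, NOT secure.
P, Q = 61, 53
N = P * Q                  # modulus 3233
PHI = (P - 1) * (Q - 1)    # 3120
E = 17                     # public exponent
D = pow(E, -1, PHI)        # private exponent via modular inverse

def vlc_broadcast_pubkey():
    """Phase 1: the AP blinks its public key (E, N) over the VLC channel."""
    return (E, N)

def client_send_seed(pubkey):
    """Phase 2: the client reads the key via its webcam, picks a random
    seed, and sends it back encrypted under the AP's public key."""
    e, n = pubkey
    seed = secrets.randbelow(n - 2) + 2
    return seed, pow(seed, e, n)

def ap_recover_seed(ciphertext):
    """Phase 3: the AP decrypts with its private key; both sides now share
    the seed, which keys all further WiFi traffic (phase 4)."""
    return pow(ciphertext, D, N)
```

The security argument rests on phase 1: because the key arrives over the physically authenticated light channel, an evil twin on the radio channel cannot substitute its own public key, so only the honest AP can decrypt the seed.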
-
-
-
QEvents: Real-Time Recommendation of Neighboring Events
Authors: Heba Hussein, Sofiane Abbar and Monishaa Arulalan
Technology always seeks to improve the little details in our lives for a faster and more efficient pace of life. One of the little problems we face in our daily lives is finding relevant events. For example, you visit a place like Katara with your kids and spend your time in vain looking for a fun event, and after you leave the venue a friend tells you about an interesting “Henna and face painting workshop organized in building 25”. To solve this problem we propose QEvents, a platform that provides users with real-time recommendations about events happening around their location that best match their preferences. QEvents renders the events in a map-centric dashboard to allow easy browsing and user-friendly interaction. QEvents continuously listens to online channels that broadcast information about events taking place in Qatar, including specialized websites (e.g., eventsdoha.com), social media (e.g., Twitter), and news (e.g., dohanews.com). The main challenge QEvents strives to solve is how to extract important features such as title, location, and time from free text describing the events. We show in this paper how one can leverage existing technologies such as topic modeling, Named Entity Recognition, and advanced text parsing to transform a plain event-listing website into a dynamic, live service capable of recognizing events' location, title, category, and starting and ending times, and nicely rendering them in a map-centric visualization that allows a more natural exploration.
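As a rough illustration of the feature-extraction step, the hand-written patterns below stand in for the trained NER and parsing models a real deployment would use; the field names and regexes are assumptions, not QEvents' actual implementation:

```python
import re

def extract_event(text):
    """Pull a time and a location out of a free-text event listing using
    hand-written patterns (stand-ins for trained NER / parsing models)."""
    time_m = re.search(r'\b(\d{1,2}(?::\d{2})?\s*(?:am|pm))\b', text, re.I)
    loc_m = re.search(r'\b(?:at|in)\s+([A-Z][\w\s]*?)(?:[,.]|$)', text)
    return {
        "time": time_m.group(1) if time_m else None,
        "location": loc_m.group(1).strip() if loc_m else None,
    }
```

Even this crude version shows why the problem is hard: the patterns break on listings that phrase time or place differently, which is where statistical NER and topic models earn their keep.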
-
-
-
Driver Drowsiness Detection Study Using Heart Rate Variability Analysis in a Virtual Reality Environment
Introduction: Mobility and road safety is one of the grand challenges that Qatar has faced during the last decade. There are many ways to enhance road safety; one is to characterize the factors contributing to road fatalities. According to the Transport Accident Commission, about 20% of fatal road accidents are caused by driver fatigue [1]. As reported by Monthly Qatar Statistics [2], the total number of deaths for the first 8 months of the current year is 116; thus, around 23 of the casualties can be attributed to driver fatigue. According to the U.S. Department of Transportation's NHTSA, in 2016 the number of fatalities involving a drowsy driver was 803, which is 2.1% of total US fatalities in the same year [3]. Therefore, it is essential to design and implement an embedded system in vehicles that can analyze, detect, and recognize the driver's state. The main aim of this project is to detect and recognize different drowsiness states using electrocardiogram (ECG) based Heart Rate Variability (HRV) analysis, acquiring heartbeat data while the driver drives the car at different times of the day. An alarm is then produced before the driver's condition reaches the dangerous stage that might lead him/her to be involved in an accident. Background: A driver's drowsiness state can be detected through different methods. One of the most accurate is to use the HRV information acquired from the electrocardiogram (ECG) signal, which helps to identify different states such as awake, dizzy, drowsy, and asleep behind the steering wheel. HRV describes the involuntary nervous function, which is in fact the variation of the R-to-R intervals (RRI) of an acquired ECG signal [4]. By identifying the RRI as well as the distance between the R peaks, we can decide whether the driver is in a drowsy state by analyzing HRV time-domain and frequency-domain features.
The Low Frequency (LF) band (0.04–0.15 Hz) describes both the sympathetic and parasympathetic activity of the heart, whereas the High Frequency (HF) band (0.15–0.4 Hz) describes only the parasympathetic activity [4]. The LF/HF ratio reflects the difference between awake and drowsy states, with the ratio decreasing gradually from the awake state to the drowsy state [5-6]. Method: A portable wireless BioRadio ECG system (Fig. 2A) (Great Lakes NeuroTechnologies, Inc.) was used with three Ag/AgCl electrodes attached to a participant's chest. The points of attachment are (i) two electrodes under the right and left collarbone, and (ii) one electrode under the lowest left rib of the participant. The ECG signal was band-pass filtered (0.05-100 Hz) and digitized at a sampling frequency of 500 Hz with 12-bit resolution to be displayed in the device's GUI software, BioCapture. Data were stored from the BioCapture software on the hard disk of an Intel Core i7 personal computer for offline analysis. The simulation of highway driving was created in a virtual reality 3D cave environment (Fig. 2B) (in the VR lab, Research Building, Qatar University). The simulation scenario was a two-way highway with two lanes in each direction, a low density of traffic, a late afternoon or night environment, a path with no sharp curves, and a rural environment with far-apart trees. ECG data were recorded from three subjects while they were monotonously driving a car in the VR environment during active and drowsy states. A camera at the front was used to detect the drowsiness stages and to segment the ECG data based on drowsiness. The ECG data of each subject were exported using the BioCapture software and segmented using a CSV splitter for analysis with the Kubios software. The ECG signal was recorded from each subject for approximately one hour, until the subject became drowsy. The one-hour sample was split into six segments, each of 10 minutes duration.
This was done to make the analysis of each sample easier and to be able to identify exactly when the subject was awake and/or drowsy. Results and Discussion: Fig. 3 shows a sample ECG trace from subject one; the RR intervals were calculated using the Kubios HRV software, and the RR series was produced by interpolation. This RR time series was used to calculate heart rate (HR) and HRV using the same software. The RR time series was also used to calculate the power spectral density (PSD) by applying the Fast Fourier Transform (FFT) method to identify the LF and HF frequency components of the HRV. Figure 4 shows the PSD averaged over trials for a sample participant in the active and drowsy states. As can be seen from Fig. 4, there is a significant difference in the LF/HF ratio, which decreased drastically from 4.164 (Fig. 4A) when the subject was awake to 1.355 (Fig. 4B) when the subject was drowsy. In addition, HF and LF alone can be taken as indicators of drowsiness. The HF power increased from 163 ms2 when the subject was awake to 980 ms2 when the subject was drowsy. Moreover, the LF power also increased, from 679 to 1328 ms2. The summary of the LF/HF ratios for the different participants is shown in Table 1, which clearly shows that LF/HF is higher for all subjects during their active states and that the ratio decreases as the subject gets drowsy. This result is in line with the findings of other researchers. Conclusion: It can be concluded from the findings of this experiment that the HRV-based drowsiness detection technique can be implemented on a single-board computer to provide a portable solution deployable in the car. Depending on the sleep stages detected through HRV analysis, the driver can be alerted through either a piezoelectric sensor or an audible alert message, which will help reduce road accidents significantly.
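The LF/HF computation described above (interpolated RR series, FFT-based PSD, band power in 0.04–0.15 Hz and 0.15–0.4 Hz) can be sketched as follows; this is a plain periodogram, whereas Kubios offers more refined spectral estimates:

```python
import numpy as np

def lf_hf_ratio(rr_ms, fs=4.0):
    """Estimate the LF/HF ratio from a series of RR intervals (in ms).

    The RR series is resampled to a uniform grid (fs Hz) by interpolation,
    the power spectrum is taken via FFT, and power is summed over the
    LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) bands.
    """
    rr_ms = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr_ms) / 1000.0                # beat times in seconds
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
    rr_uniform = np.interp(t_uniform, t, rr_ms)  # evenly sampled RR signal
    rr_uniform -= rr_uniform.mean()              # remove the DC component
    spec = np.abs(np.fft.rfft(rr_uniform)) ** 2  # raw periodogram
    freqs = np.fft.rfftfreq(len(rr_uniform), d=1.0 / fs)
    lf = spec[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = spec[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return lf / hf if hf > 0 else float("inf")
```

A drowsiness monitor would compute this ratio over sliding windows and alert when it drops below a calibrated threshold, matching the awake-to-drowsy decrease reported above.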
-
-
-
Scientific Data Visualization in an Immersive and Collaborative Environment
Tremendous interest in visualizing massive datasets has promoted tiled-display wall systems that offer an immersive and collaborative environment with extremely high resolution. To achieve efficient visualization, the rendering process should be parallelized and distributed among multiple nodes. The Data Observatory at Imperial College London has a unique setup consisting of 64 screens powered by 32 machines, providing a resolution of over 130 megapixels. Various applications have been developed to achieve high-performance visualization by implementing parallel rendering techniques and incorporating distributed rendering frameworks. ParaView is one such application that targets the visualization of scientific datasets while taking computing efficiency into consideration. The main objective of this project is to leverage the potential of the Data Observatory and ParaView for visualization by fostering data exploration, analysis, and collaboration through a scalable, high-performance approach. The primary concept is to configure ParaView on a distributed, clustered network and associate the appropriate view with each screen by controlling ParaView's virtual camera. The interaction events with the application are broadcast to all connected nodes in the cluster to update their views accordingly. The major challenges of such implementations are synchronizing the rendering across all screens, maintaining data coherency, and managing data partitioning. Moreover, the project aims to evaluate the effectiveness of large display systems compared to typical desktop screens. This has been achieved by conducting two quantitative studies assessing individual and collaborative task performance. The first task was designed to investigate the mental rotation abilities of individuals by displaying a pair of 3D models, as proposed by Shepard and Metzler, on the screen with different orientations.
The participant was then asked whether both models were the same or mirrored. This allowed evaluating individual task performance by studying the ability to recognize orientation changes in 3D objects. The task consisted of two levels: easy and hard. For the easy level, the second model was rotated by a maximum angle of 30° on two axes; in contrast, the hard level had no limitation on the angle of rotation. The second task was developed specifically for ParaView to assess the collaboration aspect. The participants had to use basic navigation operations to find hidden needles injected into a 3D brain model within 90 seconds. In both tasks, the time taken to complete the task and the correctness were measured in two environments: 1) the Data Observatory, and 2) a simple desktop screen. The average number of correct responses in the mental rotation task was calculated for all participants. It was shown that the number of correct answers in the Data Observatory is significantly higher than on the desktop screen regardless of the amount of rotation. The participants could better distinguish mirrored objects from similar ones in the Data Observatory, with percentages of 86.7% and 73.3% in the easy and hard levels, respectively. On the typical desktop screen, however, participants correctly answered less than half of the hard-level questions. This indicates that immersive large-display environments provide a better representation and depth perception of 3D objects, thus improving task performance when visualizing 3D scenes in fields that require the ability to detect variations in position or orientation. Overall, the average completion time on both displays in the easy task is relatively the same. In contrast, the participants required a longer time to complete the hard task in the Data Observatory. This could be because the large display space occupies a wide visual field, thus giving viewers an opportunity to ponder and think about the right answer.
In the collaborative search task, the participants found all the hidden needles within the time limit in the Data Observatory. The fastest group completed the task in 36 seconds, while the longest recorded time was around one minute and 12 seconds. On the desktop screen, however, all participants consumed the full 90 seconds. In the small-screen environment, the mean of the correct responses is estimated at 55%, and the maximum number of needles found was 3 out of 4, achieved by only one group. To evaluate the overall efficiency of the Data Observatory, a one-way ANOVA test was used to find significant effects on the correctness of both tasks. The completion time was excluded from this analysis because of the differences in the tasks' nature. The ANOVA revealed a significant effect of the display type on the number of correct responses, F(1,48) = 10.517, p < 0.002. This indicates that participants performed better in the Data Observatory than on the simple desktop screen. Therefore, these results support the hypothesis that large displays improve task performance and collaborative activities in terms of accuracy. The integration of both system solutions provides a novel approach to visualizing the enormous amounts of data generated by complex scientific computing. This adds great value for researchers and scientists analyzing, discussing, and discovering the underlying behavior of certain phenomena.
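The reported effect, F(1,48) = 10.517, comes from a standard one-way ANOVA. As a hedged sketch with made-up data (not the study's measurements), the F statistic is the between-group mean square over the within-group mean square:

```python
import numpy as np

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group variance over
    within-group variance, with k-1 and n-k degrees of freedom."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    grand = np.concatenate(groups)
    k, n = len(groups), grand.size
    ss_between = sum(g.size * (g.mean() - grand.mean()) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

With two display conditions (k = 2) and 50 correctness scores (n = 50), the degrees of freedom come out as 1 and 48, matching the reported F(1,48).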
-
-
-
Virtual Reality Game for Falconry
Authors: Noora Fetais, Sarah Lotfi Kharbach, Nour Moslem Haj Ahmad and Salma Salah Ahmad
Traditions and culture play a major role in our society, as they are a source of personal pride and honor. One of the Qatar National Vision 2030 pillars related to social development aims at preserving Qatar's national heritage. From this perspective, an innovative idea evolved: using Virtual Reality (VR) technology to preserve traditions. The game simulates the genuine Qatari hunting sport, which is considered one of the most famous traditional sports in Qatar. However, practicing this sport is very expensive in terms of time, effort, and resources, and since it is physically challenging, only male adults can join. This project will not only preserve the traditional sport from extinction, but will also allow children of both genders to participate in it. The game will be an innovative means of helping to spread Qatari heritage by commercializing it to the world. Moreover, it will help players learn the rules of the sport in a safe and entertaining environment. The game is one of its kind, since it merges technology and heritage at the same time. It is a virtual reality game that teaches younger generations about their ancestors' pastimes: a simulation of the traditional falconry sport that will teach children, step by step and in an attractive manner, the basics of the sport, like holding the falcon, making the falcon fly, and much more. In addition, we are cooperating with a hardware team from computer engineering that is working on customizing a glove that will ensure total immersion of the player in the game by making him feel a pull whenever the falcon is on his hand and releasing the pull when the falcon is not. Another main idea behind this project is to develop a strong relationship between the Qatari people and their heritage, which would then be accessible throughout the year instead of only on special occasions.
It will also help expatriates in Qatar explore this extraordinary heritage game at national events such as National Day and Sport Day. This project stands out with its original idea and captivating implemented features, such as the desert environment, realistic audio, visual effects and gameplay. The game is not limited to visual effects; although they are a key element, countless algorithm implementations and deployment processes lie behind them. It was crucial to conduct an ethnographic study to simulate the sport accurately: we visited the Qatari society of AlGannas, met with a specialist mentor to learn more about the hunting sport in Qatar, and collected information about the different falcon species in the state. This game can serve as a great ambassador of the Qatari falconry hunting sport in local and international events. Falconry is not limited to Qatar: since 2012, the sport has been recognized by UNESCO as an intangible cultural heritage of humanity. We customized the game to make it exclusively designed for Qatar by adding practices unique to Qatari hunters, such as holding the falcon on the left hand only.
-
Robotic Probe Positioning System for Structural Health Monitoring
Authors: Ali Ijaz, Muhammad Ali Akbar and Uvais Qidwai
Structural Health Monitoring (SHM) is a critical component of sustainable civil and mechanical structures in modern urban settings. The skyscrapers and huge bridges of the modern metropolis are essential to the prosperity and development of a country, but at the same time they present a great challenge in keeping the structures in good health. Due to the complex designs of these structures, it is typically very dangerous to perform SHM tasks with human personnel. Deploying a monitoring team with various forms of equipment and scaffolding, along with their hoisting machines, becomes extremely exorbitant for the maintenance and planning of the structures, causing an unnecessary cost spill into other areas of the available budget. For most metallic structures, a fast method of scanning an area more closely is Magnetic Flux Leakage (MFL) based defect detection, considered the most economical approach for inspecting metallic structures. Traditionally, a hand-held device is used to perform the MFL inspection. In this paper, an autonomous MFL inspection robot is presented which is small, flexible and remotely accessible. The robot is built on an aluminum chassis, driven by two servomotors, and holds a stack of very powerful neodymium magnets to produce the required magnetic circuit. As the robot moves on a metallic surface, the magnetic circuit produces a layered magnetic field just under the scanning probe. The probe is composed of several Hall-effect sensors that detect any leakage in the magnetic circuit caused by an abnormality in the surface, thus detecting an anomaly.
In this paper, a coordinated robotic inspection system is proposed that combines a set of drones with one positioning robotic crawler platform with additional load-hoisting capabilities, used to position a defect-locating probe on the building under scan. The proposed methodology can play a vital role in SHM, since it can scan a specific area and transmit back the results in a shorter time with a very safe mode of operation. This method is more reliable than fixed sensors, which monitor only a particular area of the structure. The design of the SHM robot involves intelligent integration of a navigation system comprising the crucial parts that act as its backbone and allow the robot to work autonomously: a GPS module, compass, range sensor and infrared (IR) sensor, along with the MFL probe, a winch setup, and a powerful PMDC servo motor controller (MC160) used to drive two powerful motors. The MC160 brushed motor controller proves to be a perfect platform for controlling brushed DC motors; it consists of two power drivers plus an OSMC connector for a third power driver (winch motor control). All of this adds extra degrees of freedom to the robotic system for SHM. The novelty of the methodology is that the robot's program logic is not fixed: it is flexible in path following, and it can detect an obstacle on its way to scan the building, not only detecting the obstacle but also changing course and automatically adopting a new route to the target destination. Such an autonomous robotic system can play a vital role in Structural Health Monitoring (SHM), in contrast to manual inspection, eliminating the need for a human presence in severe weather conditions. The presented methodology is condition-based, in contrast to schedule-based approaches.
A coarse scan is easily performed, and the robot is reconfigurable in the sense that it automatically changes course to adapt to rough terrain and avoids obstacles on its way. Easy deployment makes the robot an excellent choice for SHM with minimum cost and enhanced flexibility. The proposed robotic system can perform a coarse-level scan of a tall building using drones and the probe deployment robots (PDR). The drones provide a rough estimate of the location of a possible defect or abnormality, and the PDR inspects the anomaly more closely. In addition, the coarse information about a possible defect can also help deploy other means of inspection at a much lower cost, since the whole structure need not be inspected.
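The probe's anomaly logic described above can be sketched as a simple threshold test over the Hall-effect sensor readings. The function below is a minimal illustration, not the authors' implementation: the sensor layout, units, and the k-sigma decision rule are assumptions.

```python
from statistics import median, pstdev

def detect_mfl_anomalies(scan, k=3.0):
    """Flag scan positions whose peak flux-leakage amplitude deviates
    from the scan baseline by more than k standard deviations.

    scan: list of rows, one row per scan step, each row holding the
          Hall-effect sensor voltages across the probe at that step.
    Returns the indices of scan steps flagged as possible defects.
    """
    # Peak absolute reading across the sensor array at each step.
    peaks = [max(abs(v) for v in row) for row in scan]
    baseline = median(peaks)   # typical leakage level along the scan
    spread = pstdev(peaks)     # variation used to set the threshold
    return [i for i, p in enumerate(peaks) if p > baseline + k * spread]
```

In practice the threshold would be calibrated on defect-free reference scans of the same material and lift-off distance.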
-
Coordinated Robotic System for Civil Structural Health Monitoring
Authors: Muhammad Ali Akbar and Uvais Ahmed Qidwai
With recent advances in sensors, robotics, unmanned aerial vehicles, communication, and information technologies, it is now feasible to move towards the vision of ubiquitous cities, where virtually everything throughout the city is linked to an information system through technologies such as wireless networking and radio-frequency identification (RFID) tags, to provide systematic and more efficient management of urban systems, including civil and mechanical infrastructure monitoring, to achieve the goal of resilient and sustainable societies. In the proposed system, unmanned aerial vehicles (UAVs) are used to ascertain a coarse defect signature using panoramic imaging. This involves image stitching and registration so that a complete view of the surface is seen with respect to a common reference or origin point. Thereafter, crack verification and localization are performed using the magnetic flux leakage (MFL) approach, with the help of a coordinated robotic system in which the first modular robot (FMR) is placed at the top of the structure while the second modular robot (SMR) is equipped with the designed MFL sensory system. With the initial findings, the proposed system identifies and localizes cracks in the given structure. Research Methodology: The proposed approach combines the advantages of the visual and MFL inspection approaches to improve the efficiency of SHM, so both approaches should be used in a way that completes the whole inspection in an optimal time period. Thus, owing to its fast processing, visual inspection is done first, followed by an MFL-based verification approach. The visual inspection is carried out such that the drone takes off from a fixed point and takes images at different heights without changing the GPS coordinate values of the start point during flight.
After completing the first scan, the GPS coordinates are shifted and the same procedure of taking images at different heights is repeated. The process continues until the drone returns to the starting GPS coordinates. The images taken at different heights for a particular coordinate are treated as a single set, and image stitching (IS) is then applied to each individual set. IS involves a series of steps applied to consecutive images of a particular set: one image is taken as the reference image (RI) while the other is the current image (CI). The resultant stitched image becomes the RI for the next consecutive image, and the whole stitching process is applied again. This continues for each set until a final stitched image is obtained, which is saved in the database with its corresponding GPS values. The same procedure of taking and stitching images of the same structure is repeated after a few months, depending on the structural sensitivity as well as the severity of the weather conditions around it. The current results are compared with the stitched images present in the database, and if an anomaly is detected, the GPS coordinates along with the estimated height for that particular location are sent to the FMR to proceed with crack verification using MFL. The GPS module in the FMR tells the robot its own location. As soon as the Arduino Mega2560 microcontroller receives the GPS coordinates from the system, it translates them and compares them with its current location. Translation is needed because the FMR sits on top of the building whereas the drone flies at a particular distance from the building; to obtain a correct translation, the drone should remain at that fixed distance from the structure during the whole scanning process.
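The RI/CI scheme just described reduces to a sequential fold over each per-coordinate image set: the running mosaic is the reference image for every next current image. The sketch below shows only that control flow; the pairwise stitcher itself (e.g. feature matching, homography estimation and warping) is left as a pluggable function, since the paper does not specify it.

```python
from functools import reduce

def build_mosaics(image_sets, stitch_pair):
    """Build one stitched mosaic per GPS coordinate.

    image_sets: mapping {gps_coordinate: [image_at_h1, image_at_h2, ...]},
                one list per coordinate, ordered by capture height.
    stitch_pair(ri, ci): any pairwise stitcher returning the merged image;
                the running mosaic acts as the RI for each next CI.
    """
    return {gps: reduce(stitch_pair, images)
            for gps, images in image_sets.items()}
```

Each resulting mosaic would then be stored in the database keyed by its GPS values, ready for comparison against a later scan of the same coordinate.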
The robot takes its direction from the comparison between its current GPS coordinates and the translated received GPS coordinates. As the robot moves, it keeps checking the current GPS values and makes decisions accordingly. Since a temporary or permanent obstacle may be present on the roof for decorative purposes, an ultrasonic range sensor is used: when the robot comes within a defined distance of an obstacle, the sensor guides the robot to change its path, and as soon as the obstacle disappears from the sensor's range, the robot again starts checking the GPS values to reach its target destination. On reaching the target destination, it instructs the winch motor to lower the SMR to the location and obtain the current MFL reading of that place. These readings are sent to the system. If an anomaly is detected, it is verified that the structure has a deformation at that particular location. If the vision-based approach detected multiple anomalies, the robot performs the same procedure for each of them. Conclusion: With the initial findings, the proposed system appears to be a robust and inexpensive alternative to current approaches for automated inspection of civil/mechanical systems. The combination of the visual inspection and MFL approaches provides the opportunity to detect, verify and localize deformation in the structure.
-
Visualization of Wearable Data and Biometrics for Analysis and Recommendations in Childhood Obesity
Authors: Michael Aupetit, Luis Fernandez-Luque, Meghna Singh, Mohamed Ahmedna and Jaideep Srivastava
Obesity is one of the major health risk factors behind the rise of non-communicable conditions. Understanding the factors influencing obesity is very complex, since many variables can affect the health behaviors leading to it. Nowadays, multiple data sources can be used to study health behaviors, such as wearable sensors for physical activity and sleep, social media, mobile and health data. In this paper we describe the design of a dashboard for the visualization of actigraphy and biometric data from a childhood obesity camp in Qatar. This dashboard allows quantitative discoveries that can be used to guide patient behavior and orient qualitative research. CONTEXT: Childhood obesity is a growing epidemic, and with technological advancements, new tools can be used to monitor and analyze the lifestyle factors leading to obesity, which in turn can help in timely health behavior modifications. In this paper we present a tool for visualization of personal health data, which can assist healthcare professionals in designing personalized interventions for improving health. The data used for the tool were collected as part of a research project called "Adaptive Cognitive Behavioral Approach to Addressing Overweight and Obesity among Qatari Youth" (ICAN). The ICAN project was funded by the Qatar National Research Fund (a member of Qatar Foundation) under project number NPRP X-036-3-013. The participants in the study were involved in activities aimed at improving their health behavior and losing weight. All participants and their parents/guardians provided informed consent prior to participation. Data from various sources (social media, mobile, wearables and health records) were collected from subjects and linked using a unique subject identifier.
These datasets provide what we have defined as a 360-degree Quantified Self (360QS) view of individuals. We have focused on the visualization of the biometrics and physical activity data, proposing different visualization techniques to analyze the activity patterns of participants in the obesity trial. Our dashboard is designed to compare data across time, and among reference individuals and groups. DATA FROM OBESE CHILDREN: Biometric data were measured periodically and included height, weight and the derived body-mass index (BMI), body fat percentage, waist circumference and blood pressure for each individual. Physical activity data were collected via accelerometers; the raw signals were quantized into four activity levels (sedentary, light, moderate and vigorous) using a human activity recognition algorithm. INTERACTIVE ANALYTIC DASHBOARD: The objective of the dashboard is to provide an overview of the actigraphy data and enable primary data exploration by an expert user. In a Control Panel, drop-down menus enable selecting two subjects or groups of subjects to be compared based on identifiers and gender. During data collection, some devices were not worn at all times; hence they recorded long periods of "sedentary" activity. The user can use check-boxes to select the biometrics she wants to compare (e.g., BMI and body fat percentage). A Visualization Panel shows both selected (groups of) subjects as bar charts indicating the hours of activity, with an activity-level breakdown per day through time. The color legend of the bars is shown in the Control Panel: reddish colors for moderate (light red) and vigorous (dark red) activity levels, and bluish colors for light (light blue) and sedentary (dark blue) activity levels. The user can select a time window by brushing horizontally on the time range to zoom in or out. Two line charts show the biometrics selected in the Control Panel, with line colors corresponding to the selected (groups of) subjects.
The average activity breakdown by activity level is also displayed for weekdays and for weekend days. QUANTITATIVE ANALYSIS AND SUPPORT: Thanks to our dashboard we can easily identify trends in biometrics, and compare activity levels during weekdays and weekend days to support lifestyle recommendations. CONCLUSION: This interface is a tool to give a primary overview of the data, likely to orient more detailed analysis. For instance, a more in-depth study of the relation between sleep duration and BMI could be conducted. Another outcome, related to the experimental setup, would consist of recommending biometrics to be measured more often, or finding incentives for subjects to wear the devices more consistently. A health expert could also provide the subject with a target status (e.g., weight) to compare with and converge to, along with recommendations about the activities he/she should improve: e.g., go to bed earlier, wake up earlier during weekends, have more vigorous activity during afternoons. Other available tools, such as the Fitbit dashboard (Fitbit Inc., USA), do not give detailed activity levels across time, nor a comparison with a reference individual. Our next steps include performing a qualitative evaluation of our dashboard and making improvements based on end users' feedback.
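The weekday/weekend breakdown shown in the dashboard amounts to a simple aggregation over the quantized activity labels. The sketch below is illustrative only: it assumes one label per minute and a hypothetical input layout, not the authors' actual pipeline.

```python
from collections import Counter
from datetime import date

LEVELS = ("sedentary", "light", "moderate", "vigorous")

def activity_breakdown(days):
    """Aggregate quantized actigraphy for one subject.

    days: mapping {date: [activity-level label per one-minute epoch]}.
    Returns hours spent at each level, split into weekday and weekend
    totals (the one-minute epoch length is an assumption).
    """
    out = {"weekday": Counter(), "weekend": Counter()}
    for d, labels in days.items():
        key = "weekend" if d.weekday() >= 5 else "weekday"
        out[key].update(labels)
    # Convert minute counts to hours for each activity level.
    return {k: {lvl: c[lvl] / 60 for lvl in LEVELS} for k, c in out.items()}
```

Per-day totals from the same counters would feed the stacked bar charts, while the weekday/weekend split supports the averaged comparison view.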
-