Qatar Foundation Annual Research Conference Proceedings Volume 2016 Issue 1
- Conference date: 22-23 Mar 2016
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2016
- Published: 21 March 2016
Accelerating Data Synchronization between Smartphones and Tablets using PowerFolder in IEEE 802.11 Infrastructure-based Mesh Networks
Authors: Kalman Graffi and Andre Ippisch
Smartphones are nowadays widely available and popular in first, second and third world countries. Nevertheless, their connectivity is limited to GPRS/UMTS/LTE connections with small monthly data plans, even if the devices are nearby. Local communication options such as Bluetooth or NFC, on the other hand, only provide very poor data rates. Future applications envision smartphones and tablets as the main working and leisure devices, and with the upcoming trends in multimedia and working documents, file sizes are expected to grow further. In strong contrast to this, smartphone users are faced with very small monthly data plans for communicating over the Internet: at most a few GB are available per month, while many smartphone users worldwide regularly have 1 GB or much less. Thus, future data exchange between smartphones and tablets is severely limited by current technology and business models. Even if wireless Internet plans allowed unlimited data exchange, most regions of the world would still be limited in connectivity and data exchange options. This limitation in the connectivity of the future's main communication devices, namely smartphones and tablets, is striking, as current solutions strongly handicap data exchange between these devices. Given that data exchange and synchronization often involve geographically close parties, such as nearby friends or colleagues, it is remarkable that local data exchange is so strongly limited by missing or traffic-limited connectivity to the Internet.
In this presentation, we introduce a novel approach for high-speed data transfers between smartphones and/or tablets in local environments. Specifically, our approach enables empirically measured transmission speeds of up to 60 Mbit/s between nearby nodes in any environment. It does not require any contract with a mobile Internet provider and is not subject to any data plan restriction. We introduce a novel IEEE 802.11 Infrastructure Mode based mesh networking approach which allows Android phones to create multihop mesh networks despite the limitations of the WiFi standard. With this approach, wireless multihop networks can be built, and we evaluate them in empirical measurements, allowing automatic synchronization and data exchange between smartphones and tablets within a range of up to 100 meters for a single hop and over several wireless hops. One use case considers colleagues working in the same building who would like to exchange data on their smartphones and tablets: data requests and offerings are signaled in the multihop environment, and dedicated wireless high-speed connections are established to transfer the data. Another use case is local communication of mails, images and chat messages with friends nearby; as the communication approach also supports multicasting, several users can be addressed with a single message. A further use case is a wireless file sharing or data exchange service which allows users to specify their interests, such as action movies on a campus or information on sale promotions in a mall; while the user walks along, his smartphone picks up relevant data without user interaction. One use case that is particularly relevant for Qatar is smart cities: smart devices would be able to pick up sensor data from buildings, local transport vehicles or citizens in a delay-tolerant fashion, using the GPS functionality of Android to deliver accurate sensor information tagged with the location and time of the sensor snapshot. Finally, this local, high-speed wireless data synchronization approach allows us to implement a data-synchronization service such as Dropbox, Box or PowerFolder directly between smartphones and tablets, a novel feature on the market. To implement this idea, we are collaborating with PowerFolder, Germany's market leader in the field of data synchronization at universities and in academia. From the technology side, we observe that Android has a worldwide market share of around 80% as a smartphone operating system. While Android supports connections to the Internet and cloud-based services very well, it offers only Bluetooth and NFC for local connectivity, and both technologies provide very low data rates. Wi-Fi Direct is also supported but, like Bluetooth, requires a lot of user interaction and thus does not scale to many contact partners. The IEEE 802.11 standard supports an ad hoc mode for local, high-speed wireless communication; unfortunately, Android does not support the IEEE 802.11 ad hoc mode, which would allow local high-speed connections of up to 11 Mbit/s. Instead, we use the Infrastructure Mode of IEEE 802.11 to create ad hoc wireless mesh networks, which supports much higher data rates of up to 54 Mbit/s and even more.
Please note that WiFi Direct also claims to offer similar performance, but it requires heavy user interaction and in our empirical studies failed to connect more than 3–4 nodes reliably, whereas we are aiming for interaction among hundreds of nodes in a delay-tolerant manner without user interaction. Our aims are to (1) use only unrooted Android functionality, (2) allow nodes to find each other through the wireless medium, (3) automatically connect and exchange signaling information without user interaction, (4) decide on messages to exchange in a delay-tolerant manner supporting opportunistic networking, (5) coordinate transmission between various nodes in case more than one node is in proximity, (6) allow single-hop data transfers based on the signaled have and need information, (7) allow multihop node discovery and thus (8) multihop routing of data packets. We reach these aims by using the unrooted Android API that allows apps to open a WiFi hotspot for Internet tethering. While we do not want to use the Internet, this service also allows clients to connect to the hotspot and exchange messages with it. Using this approach, i.e. when an Android phone or tablet opens a hotspot, other Android devices running the App can join without user interaction. For that, the App on the joining node creates a list of known WiFi networks, which are consistently named “P2P-Hotspot-[PublicKeyOfHotspotNode]”. The public key of the hotspot node is used as a unique identifier of the node and as an option to asymmetrically encrypt communication to that node. With these Android API functionalities, we are able to dynamically connect unrooted, off-the-shelf Android devices that run our App. The IEEE 802.11 Infrastructure Mode that we use has the characteristic that the access point, in our case the hotspot, is involved in any communication with attached clients: clients can only communicate with each other through the hotspot, and all clients share the available bandwidth of the WiFi cell to communicate with the hotspot. Thus, for a set of nodes, it is more advisable to have dedicated 1-to-1 connections, with one node acting as hotspot and the other as client, for fast high-speed transmission, rather than having all nodes connected over one hotspot and sharing its bandwidth. In order to support this, we differentiate between a dedicated signaling phase and a dedicated data transfer phase. Nodes scan the available WiFi networks and look for specific SSIDs, namely “P2P-Hotspot-[PublicKeyOfHotspotNode]”; these are available hotspots. As the App considers these networks as known and the access key is hardcoded, a direct connection can be established without user interaction. These steps fulfill requirements (1) and (2). Next, each node signals the data it holds for the various destination nodes, addressed by the public keys that serve as node identifiers. Nodes can also signal data that they generally share on a keyword basis. The hotspot gathers this signaling information from its clients and creates an index of which node has what (have list) and wants what (want list). Based on this, potential matches are calculated and communicated to the clients. In order to establish direct high-speed connections, these nodes release their connection to the hotspot and establish a new connection with each other: one as a hotspot and one as a connected client.
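The matching step performed by the hotspot can be sketched as follows; the data structures and function below are our own simplified illustration of the have/want index, not the actual Android implementation:

```python
def match_pairs(signaling):
    """signaling: {node_public_key: {"have": set_of_items, "want": set_of_items}}.
    Returns dedicated 1-to-1 transfer pairs (provider, requester, items),
    using each node at most once, as coordinated by the hotspot."""
    assigned = set()
    pairs = []
    for requester, info in signaling.items():
        if requester in assigned:
            continue
        for provider, other in signaling.items():
            if provider == requester or provider in assigned:
                continue
            items = info["want"] & other["have"]
            if items:
                pairs.append((provider, requester, items))
                assigned.update({provider, requester})
                break
    return pairs

# Example: node B holds the file that node A wants
index = {"pkA": {"have": {"doc1"}, "want": {"video7"}},
         "pkB": {"have": {"video7"}, "want": set()}}
print(match_pairs(index))   # [('pkB', 'pkA', {'video7'})]
```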
This step is very dynamic and allows closely located nodes to connect and exchange data based on their signaling to the previous hotspot. The freshly connected nodes signal their offerings and interests again to confirm the match. The following high-speed 1-to-1 data transfer can reach up to 60 Mbit/s, which is much more than the 11 Mbit/s of the traditional IEEE 802.11 ad hoc mode. Once they are done with the transfer, they release their link and listen again for the presence of coordinating hotspots. If no hotspot is available, they offer this role themselves, based on random timeouts, and create a hotspot. A hotspot actively waits for clients only for a short time and then tries to join another hotspot itself; the roles fluctuate constantly. Thus the network constantly provides connection options to nearby nodes through dynamic hotspotting. Hotspots index the content of connected clients and coordinate ideally matching 1-to-1 pairs for high-speed transfers. Thus, requirements (2)-(6) are resolved. Finally, to implement requirements (7) and (8), the nodes also maintain information on the hotspots they have met and the nodes they have exchanged data with, thus creating a time-dependent local view of the network. In addition to signaling the data files they offer, nodes also signal their connectivity history to the hotspots they meet. Hotspots, on the other side, share this information with newly joining clients, thus creating a virtual view of the network topology. Using this information, nodes can decide in which direction, i.e. over which nodes, to route a message, so multihop routing is supported. Step by step and in a delay-tolerant manner, data packets can be passed on until they reach the destination. This approach for opportunistic multihop routing fulfills requirements (7) and (8). In close cooperation with PowerFolder, we implemented this approach and evaluated its feasibility and performance. PowerFolder is the leading data synchronization solution in the German market and allows data to be synchronized between desktop PCs. With our extension, it is also possible to synchronize data across smartphones and tablets directly, thus saving mobile data traffic from the data plan while supporting fast, best-in-class transmission speeds. We implemented our approach in an Android app and performed various tests on connection speed, transmission times and the reliability of the solution. Our measurements show that transmission speeds of up to 60 Mbit/s are reached in the closest proximity, and around 10 Mbit/s are still obtainable at 100 meters distance. The multihop functionality works reliably, with a decrease in transmission speed related to the distance and the number of hops. We also experimented with direct hop-wise and end-to-end transmission encryption and the resulting security gain; the encryption speed is reasonable for small files. Please note that using the public key infrastructure which we established, it is possible to encrypt data both hop-by-hop and end-to-end. Our approach presents a novel networking approach for the future connected usage of smartphones and tablets in the information-based society. Addressing one of the grand challenges of Qatar, we are optimistic that the approach is also very suitable in Qatar to support the society's shift towards better usability and more secure, higher-bandwidth data exchange.
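The abstract does not spell out the exact forwarding rule; one simple delay-tolerant heuristic consistent with the connectivity history described above would be to hand a packet to the in-range neighbour that has met the destination most recently, as sketched below with hypothetical node identifiers:

```python
import time

def choose_next_hop(neighbours_history, destination):
    """Delay-tolerant forwarding heuristic (our own simplification):
    neighbours_history maps each in-range neighbour to a dict of
    {node: last_time_seen}; pick the neighbour that has met the
    destination most recently, or None to keep the packet and wait."""
    best, best_seen = None, None
    for neighbour, history in neighbours_history.items():
        seen = history.get(destination)
        if seen is not None and (best_seen is None or seen > best_seen):
            best, best_seen = neighbour, seen
    return best

now = time.time()
history = {"pkC": {"pkZ": now - 60},      # met destination a minute ago
           "pkD": {"pkZ": now - 3600}}    # met destination an hour ago
print(choose_next_hop(history, "pkZ"))    # 'pkC'
```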
Mixed Hybrid Finite Element Formulation for Subsurface Flow and Transport
Authors: Ahmad S. Abushaikha, Denis V. Voskov and Hamdi A. Tchelepi
We present a mixed hybrid finite element formulation for modelling subsurface flow and transport. The formulation is fully implicit in time and employs tetrahedral elements for the spatial discretization of the subsurface domain. It comprises all the main physics that dictate the flow behaviour for subsurface flow and transport, since it is developed on, and inherits them from, the Automatic Differentiation General Purpose Research Simulator (AD-GPRS) of the Stanford University Petroleum Research Institute (SUPRI-B).
Traditionally, the finite volume formulation is the method employed for computational fluid dynamics and reservoir simulation, thanks to its local conservation of mass and energy and its straightforward implementation. However, it requires the use of structured grids and fails to handle high anisotropy in the material properties of the domain. Also, the method is of low computational order; the computed local solution in the grid is piecewise constant.
Here, we use the mixed hybrid finite element formulation, which is of high order and can handle high anisotropy in the material properties. It solves the momentum and mass balance equations simultaneously, hence the name mixed. This strongly coupled scheme facilitates the use of unstructured grids, which are important for modelling the complex geometry of subsurface reservoirs. The Automatic Differentiation library of AD-GPRS automatically differentiates the computational variables needed for the construction of the Jacobian matrix, which consists of the momentum and mass balance unknowns and any wells present.
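For orientation, the coupled system referred to above can be written schematically as follows; this is a minimal single-phase sketch in our own notation, whereas AD-GPRS handles the full multiphase, compositional case:

```latex
\begin{aligned}
\mathbf{u} &= -\frac{\mathbf{K}}{\mu}\,\nabla p
  && \text{(momentum balance / Darcy velocity)}\\
\frac{\partial(\phi\rho)}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}) &= q
  && \text{(mass balance)}
\end{aligned}
```

In the hybridized mixed form, the velocity unknowns are associated with element interfaces, and additional interface pressures (Lagrange multipliers) enforce flux continuity between elements, which is what enlarges the Jacobian relative to a finite volume discretization.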
We use two types of tetrahedral elements, Raviart-Thomas (RT0) and Brezzi-Douglas-Marini (BDM1), of low and high order respectively. The RT0 element has one momentum equation per interface, while the BDM1 element has three momentum equations per interface, ensuring second-order flux approximation. Therefore, compared to the finite volume approach, where the Jacobian consists of the mass balance and well unknowns only, the mixed hybrid formulation has a larger Jacobian (one order of magnitude larger for the high-order element), which is computationally more expensive. Nonetheless, the formulation converges better, both numerically and physically, than the finite volume approach, as we show.
The full system is solved implicitly in time to account for the non-linear behaviour of the flow and transport at the subsurface level which is highly pressure, volume, and temperature (PVT) dependent. Therefore, we make use of the already robust PVT formulations in AD-GPRS. We present a carbon dioxide (CO2) sequestration case for the Johnson formation and discuss the numerical and computational results.
This is of crucial importance for Qatar and the Middle East, where effective reservoir modelling and management require a robust representation of flow and transport at the subsurface level using state-of-the-art formulations.
In the literature, Wheeler et al. (2010) employ a multipoint flux mixed finite element approach to eliminate the momentum balance equation from the Jacobian and substitute it with the well-established multipoint flux approximation (MPFA) in the mass balance equation. Since it is based on MPFA, it still suffers from convergence issues where high anisotropy is present in the material properties. They have recently extended their work to compositional fluid modelling (Singh & Wheeler 2014); however, they solve the system sequentially in time, whereas our method solves the system fully implicitly in time. Sun & Firoozabadi (2009) solve the pressure implicitly and the fluid properties explicitly in time by further decoupling the mass balance equations, which decreases the physical representation of the non-linear behaviour of flow and transport at the subsurface level.
Develop a Global Scalable Remote Laboratory Based on a Unified Framework
Authors: Hamid Parsaei, Ning Wang, Qianlong Lan, Xuemin Chen and Gangbing Song
Information technology has had a great impact on education and research by enabling additional teaching and research strategies. According to the 2014 Sloan Survey of Online Learning, the number of students who have taken at least one online course increased to a new total of 7.1 million during the Fall 2013 semester. Remote laboratory technology has made great progress in the arena of online learning. Internet remote-controlled experiments were previously implemented based on the unified framework at UH and TSU. However, end users of that framework were required to install the LabVIEW plug-in in their web browsers to use remote experiments online, and the framework only supported desktops and laptops. In order to resolve the plug-in issues, a novel unified framework is proposed. This unified framework is based on Web 2.0 and HTML5 technology. As shown in Fig. 1, there are three application layers in the unified framework: the client application layer, the server application layer and the experiment control layer. The client web application is based on HyperText Markup Language (HTML), Cascading Style Sheets (CSS) and the jQuery/jQuery Mobile JavaScript libraries. Mashup technology is used for the user interface implementation. The client web application can run in most current popular browsers such as IE, Firefox, Chrome, Safari, etc. The server application is based on Web Service technology and built directly on top of the MySQL database, the Apache web server engine and the Node.js web server engine. The server application utilizes JSON and Socket.IO, which is built on the WebSocket protocol, to implement real-time communication between the server application and the client web application (Rai, R.). The server application runs on a LANMP (Linux/Apache/Node.js/MySQL/PHP) server. The experiment control application is based on LabVIEW and uses Socket.IO for real-time communication with the server application. The remote laboratory based on the novel unified framework is able to run on many different devices, such as desktop and laptop PCs, iPads, Android tablets, smartphones, etc., without software plug-ins. However, some challenges remain for remote laboratory development: 1) How to access remote experiments installed at different laboratories through a single webpage? 2) How to manage the remote experiments at the different laboratories? 3) How to resolve system safety issues? In order to resolve these challenges, a new scalable global remote laboratory was implemented at Texas A&M University at Qatar (TAMUQ) based on the improved novel unified framework. To integrate the three different remote laboratories at TAMUQ, UH and TSU, the new global scalable remote laboratory architecture was designed and developed at TAMUQ. The labs operate with a unified scheduler, a federated authentication module and a user management system. Meanwhile, a scalable server was also set up at TAMUQ to support expansion of the remote laboratory. Figure 2 shows the global scalable remote laboratory architecture. In this scalable remote laboratory, the laboratory center server at TAMUQ consists of a scalable server connected to the other two lab center servers at UH and TSU. All three laboratory center servers are based on the Linux/Node.js/Apache/MySQL/PHP (LNAMP) architecture.
Socket.IO, a real-time communication technology, is used to manage the transmission of experimental data and other user information (such as user profiles and login information) in this global platform. The center server at TAMUQ was designated as the central proxy server for the scalable remote laboratory. With this global platform, end users can access all remote experiments of these three universities via one website. With the new global scalable remote laboratory based on the novel unified framework, a scalable scheduler and a federated authentication solution were designed and implemented. At the same time, issues with security control and management of experiment access were addressed by taking full advantage of the functionality offered by a security management engine based on the MD5 algorithm. As shown in Fig. 3, a new user interface was also developed and integrated into the new scalable remote laboratory. With the new global scalable remote laboratory, future teaching and learning activities at TAMUQ, UH and TSU will be improved. Meanwhile, the improved unified framework will significantly benefit future remote laboratory development as well.
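To illustrate the Socket.IO relay pattern described above, here is a minimal sketch using the python-socketio and eventlet packages; note that the actual server application is built on Node.js, and the event name "experiment_data" is our own placeholder:

```python
import socketio
import eventlet
import eventlet.wsgi

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

@sio.event
def connect(sid, environ):
    # a lab-side controller or a browser client has connected
    print("client connected:", sid)

@sio.on("experiment_data")
def relay_experiment_data(sid, data):
    # broadcast readings from the experiment controller to all other clients
    sio.emit("experiment_data", data, skip_sid=sid)

if __name__ == "__main__":
    eventlet.wsgi.server(eventlet.listen(("", 5000)), app)
```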
The Assessment of Pedestrian-Vehicle Conflicts at Crosswalks Considering Sudden Pedestrian Speed Change Events
Authors: Wael Khaleel Alhajyaseen and Miho Iryo
Introduction
Pedestrians are vulnerable road users. In Japan, more than one-third of traffic crash fatalities are pedestrians, and most accidents occur while pedestrians are crossing a road. To evaluate alternative countermeasures effectively, traffic simulation has recently come to be considered a powerful decision support tool (Shahdah et al. 2015). A very important requirement for reliable use of traffic simulation in safety assessment is the proper representation of road user behavior at potential conflict areas. Severe conflicts usually occur when road users fail to predict other users' decisions and react to them properly. The widely varying behaviors and maneuvers of vehicles and pedestrians may lead to misunderstanding of their decisions, which can result in severe conflicts. So far, most existing studies assume constant walking speeds for pedestrians and complete obedience to traffic rules when crossing roads, as if they were on walkways. However, it is known that pedestrians behave differently at crosswalks compared to other walking facilities such as sidewalks and walkways. Pedestrians tend to walk faster at crosswalks (Montufar et al. 2007). Furthermore, their compliance with traffic signals varies with traffic conditions and other factors (Wang et al. 2011). Although many studies have analyzed pedestrian behavior including speed at crosswalks, most of them are based on the average crossing speed, without considering the speed profile of the crossing process and the variations within it. Iryo-Asano et al. (2015) observed from empirical data that pedestrians may suddenly and significantly change their speed on crosswalks as a reaction to surrounding conditions. Such speed changes cannot be predicted by drivers, which can lead to safety hazards. A study of speed change maneuvers is therefore critical for representing potential collisions in simulation systems and for reasonably evaluating the probability and severity of collisions. The objective of this study is to quantitatively model pedestrian speed change maneuvers and integrate the model into traffic simulation for assessing traffic safety.
Pedestrian speed change events as critical maneuvers
Figure 1 shows an observed pedestrian trajectory with a sudden speed change. If a turning vehicle is approaching the conflict area, the driver may behave based on his expectation of the pedestrian's arrival time at the conflict area. If the pedestrian suddenly changes his/her speed close to the conflict area, the driver will not be able to predict the new arrival time, which might lead to severe conflicts. Figure 1 demonstrates a real observed example of such a speed change: the pedestrian suddenly increased his speed at the beginning of the conflict area, which led to an arrival at the conflict area 2.0 seconds (Tdif) earlier than the time expected had the pedestrian continued at his/her previous speed. A turning vehicle present at the same time cannot predict this early arrival, and these 2 seconds are large in terms of collision avoidance. Iryo-Asano et al. (2015) showed that pedestrian speed changes mainly occur 1) at the entrance to the pedestrian-vehicle conflict area and 2) when there is a large gap between the pedestrian's current speed and the speed necessary to complete crossing before the end of the pedestrian flashing green interval. In this study, a further in-depth analysis is conducted by combining the pedestrian data with the information of approaching vehicle trajectories to identify the factors influencing pedestrians' sudden speed change events. The probability of a speed change is quantitatively modeled as a function of the remaining green time, the remaining length to cross, the current walking speed and other related variables.
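As an illustration of how such a probability model can be plugged into a simulation, the sketch below uses a logistic form with made-up coefficients; the actual model in the study is calibrated from the observed trajectory and signal data:

```python
import math

# Illustrative-only coefficients; the real model is estimated from data.
BETA = {"intercept": -4.0, "t_green": -0.08, "len_left": 0.15, "v_now": -1.2}

def speed_change_probability(t_green_remaining, length_remaining, current_speed):
    """Logistic sketch of P(sudden speed change) for one simulation step."""
    z = (BETA["intercept"]
         + BETA["t_green"] * t_green_remaining    # seconds of pedestrian green left
         + BETA["len_left"] * length_remaining    # metres still to cross
         + BETA["v_now"] * current_speed)         # current walking speed, m/s
    return 1.0 / (1.0 + math.exp(-z))

# Example: 5 s of green left, 12 m to go, walking at 1.3 m/s
print(round(speed_change_probability(5.0, 12.0, 1.3), 3))
```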
Simulation integration for safety assessment
The proposed pedestrian maneuver model is implemented in an integrated simulation model by combining it with a comprehensive turning vehicle maneuver model (Dang et al. 2012). The vehicle maneuver model is dedicated to representing the probabilistic nature of drivers' reactions to road geometry and surrounding road users in order to evaluate the impact of user behavior on traffic safety. It produces speed profiles of turning vehicles considering the impact of geometry (e.g. intersection angles, setback distance of the crosswalks) and the gap between the expected arrival time of the vehicle and that of the pedestrians at the conflict area. The proposed model allows us to study the dependencies and interactions between pedestrians and turning vehicles at crosswalks. Using the integrated traffic simulation, pedestrian-vehicle conflicts are generated and surrogate safety measures, such as Post Encroachment Time and the vehicle speeds at conflict points, are estimated. These measures are used to evaluate the probability and severity of pedestrian-vehicle conflicts. To verify the characteristics of the simulated conflicts, estimated and observed surrogate safety measures at a selected signalized crosswalk are compared through statistical tests.
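Post Encroachment Time itself is a simple quantity; a small helper like the following could compute it from the simulated arrival and departure times at the conflict area (the numbers in the example are invented):

```python
def post_encroachment_time(t_first_exit, t_second_entry):
    """Post Encroachment Time (PET): time gap between the first road user
    leaving the conflict area and the second one entering it.
    A smaller PET indicates a more severe conflict."""
    return t_second_entry - t_first_exit

# Example: pedestrian clears the conflict area at t = 12.4 s,
# the turning vehicle enters it at t = 13.1 s  ->  PET = 0.7 s
print(post_encroachment_time(12.4, 13.1))
```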
Conclusions
Considering the sudden speed change behavior of pedestrians in the simulation environment generates more reliable and realistic pedestrian maneuvers and turning vehicle trajectories, which enables more accurate assessment of pedestrian-vehicle conflicts. This in turn enables the assessment of improvements in signal control settings and the geometric layout of crosswalks towards safer and more efficient operations. Furthermore, the model is useful for real-time detection of hazardous conflict events, which can be applied in vehicle safety assistance systems.
This research is supported by JSPS KAKENHI Grant No. 15H05534. The authors are grateful to Prof. Hideki Nakamura and Ms. Xin Zhang for providing video survey data.
References
Dang, M.T., et al. (2012) Development of a Microscopic Traffic Simulation Model for Safety Assessment at Signalized Intersections, Transportation Research Record, 2316, pp. 122–131.
Iryo-Asano, M., Alhajyaseen, W., Zhang, X. and Nakamura, H. (2015) Analysis of Pedestrian Speed Change Behavior at Signalized Crosswalks, 2015 Road Safety & Simulation International Conference, October 6th–8th, Orlando, USA.
Montufar, J., Arango, J., Porter, M., and Nakagawa, S. (2007), The Normal Walking Speed of Pedestrians and How Fast They Walk When Crossing The Street, Proceedings of the 86th Annual Meeting of the Transportation Research Board, Washington D. C., USA.
Shahdah, U., et al. (2015) Application of Traffic Microsimulation for Evaluating Safety Performance of Urban Signalized Intersections, Transportation Research Part C, 60, pp. 96–104.
Wang, W., Guo, H., Gao, Z., and Bubb, H. (2011) Individual Differences of Pedestrian Behaviour in Midblock Crosswalk and Intersection, International Journal of Crashworthiness, Vol. 16, No. 1, pp. 1–9.
Bi-Text Alignment of Movie Subtitles for English-Arabic Statistical Machine Translation
Authors: Fahad Ahmed Al-Obaidli and Stephen Cox
With the increasing demand for access to content in foreign languages in recent years, we have also seen a steady improvement in the quality of tools that can help bridge this gap. One such tool is Statistical Machine Translation (SMT), which learns automatically from real examples of human translations, without the need for manual intervention. Training such a system takes just a few days, sometimes even hours, but requires a lot of sentences aligned with their corresponding translations, a resource known as a bi-text.
Such bi-texts contain translations of written texts, as they are typically derived from newswire, administrative, technical and legislative documents, e.g., from the EU and UN. However, with the widespread use of mobile phones and online conversation programs such as Skype, as well as personal assistants such as Siri, there is a growing need for spoken language recognition, understanding, and translation. Unfortunately, most bi-texts are not very useful for training a spoken language SMT system, as the language they cover is written, which differs from speech in style, formality, vocabulary choice, length of utterances, etc.
It turns out that there exists a growing community-generated source of spoken language translations, namely movie subtitles. These come in plain text in a common format in order to facilitate rendering the text segments accordingly. The dark side of subtitles is that they are usually created for pirated copies of copyright-protected movies. Yet, their use in research exploits a “positive side effect” of Internet movie piracy, which allows for easy creation of spoken bi-texts in a number of languages. This alignment typically relies on a key property of movie subtitles, namely the temporal indexing of subtitle segments, along with other features.
Due to the nature of movies, subtitles differ from other resources in several aspects: they are mostly transcriptions of movie dialogues that are often spontaneous speech, which contains a lot of slang, idiomatic expressions, and also fragmented spoken utterances, with repetitions, errors and corrections, rather than grammatical sentences; thus, this material is commonly summarised in the subtitles, rather than being literally transcribed. Since subtitles are user-generated, the translations are free, incomplete and dense (due to summarization and compression) and, therefore, reveal cultural differences. Degrees of rephrasing and compression vary across languages and also depend on subtitling traditions. Moreover, subtitles are created to be displayed in parallel to a movie in order to be linked to the movie's actual sound signal. Subtitles also arbitrarily include some meta information such as the movie title, year of release, genre, subtitle author/translator details and trailers. They may also contain visual translation, e.g., into a sign language. Certain versions of subtitles are especially compiled for the hearing-impaired to include extra information about non-spoken sounds that are either primary, e.g., coughing, or secondary background noises, e.g., soundtrack music, street noise, etc. This brings yet another challenge to the alignment process: the complex mappings caused by many deletions and insertions. Furthermore, subtitles must be short enough to fit the screen in a readable manner and are only shown for a short time period, which presents a new constraint to the alignment of different languages with different visual and linguistic features.
The languages a subtitle file is available in differ from one movie to another. Notably, the Arabic language, even though it is spoken by more than 420 million people worldwide and is the 5th most spoken language worldwide, has a relatively scarce online presence. For example, according to Wikipedia's statistics of article counts, Arabic is ranked 23rd. Yet, Web traffic analytics show that search queries for Arabic subtitles and traffic from the Arabic region are among the highest. This increase in demand for Arabic content is not surprising given the recent dramatic economic and socio-political shifts in the Arab World. On another note, Arabic, as a Semitic language, has a complex morphology, which requires special handling when mapping it to another language and therefore poses a challenge for machine translation.
In this work, we look at movie subtitles as a unique source of bi-texts in an attempt to align as many translations of movies as possible in order to improve English-to-Arabic SMT. Translating from English into Arabic is an underexplored translation direction and, due to the morphological richness of Arabic along with other factors, yields significantly lower results compared to translating in the opposite direction (Arabic to English).
For our experiments, we collected pairs of English-Arabic subtitles for more than 29,000 movies/TV shows, a collection that is bigger than any preexisting subtitle data set. We designed a sequence of heuristics to eliminate the inherent noise that comes with the subtitles' source in order to yield good-quality alignment. We aligned the subtitles by measuring the time overlap between segments, utilising the time information provided within the subtitle files. This alignment approach is language-independent and outperforms other traditional approaches such as the length-based approach, which relies on segment boundaries to match translation segments; segment boundaries differ from one language to another, e.g., because of the need to fit the text on the screen.
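The core of the time-overlap idea can be sketched as follows; this is a simplified greedy version with toy data, not the full heuristic pipeline used in the experiments:

```python
def overlap(seg_a, seg_b):
    """Temporal overlap (in seconds) between two subtitle segments,
    each given as (start_time, end_time)."""
    start = max(seg_a[0], seg_b[0])
    end = min(seg_a[1], seg_b[1])
    return max(0.0, end - start)

def align_by_time(subs_en, subs_ar, min_overlap=0.5):
    """Greedy one-pass alignment: pair each English segment with the
    Arabic segment that overlaps it most, if the overlap is long enough."""
    pairs = []
    for en_start, en_end, en_text in subs_en:
        best = max(subs_ar,
                   key=lambda ar: overlap((en_start, en_end), (ar[0], ar[1])),
                   default=None)
        if best and overlap((en_start, en_end), (best[0], best[1])) >= min_overlap:
            pairs.append((en_text, best[2]))
    return pairs

# Example with toy segments: (start_s, end_s, text)
en = [(1.0, 3.5, "Where are you going?"), (4.0, 6.0, "Home.")]
ar = [(1.1, 3.4, "إلى أين أنت ذاهب؟"), (4.2, 6.1, "إلى البيت.")]
print(align_by_time(en, ar))
```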
Our goal was to maximise the number of aligned sentence pairs while minimising the alignment errors. We evaluated our models relatively and also extrinsically, i.e., by measuring the quality of an SMT system that used this bi-text for training. We automatically evaluated our SMT systems using BLEU, a standard measure for machine translation evaluation. We also implemented an in-house Web application tool in order to crowd-source human judgments comparing the SMT baseline's output and our best-performing system's output.
Our experiments yielded bi-texts of varied size and relative quality, which we used to train an SMT system. Adding any of our bi-texts improved the baseline SMT system, which was trained on TED talks from the IWSLT 2013 competition. Ultimately, our best SMT system outperformed the baseline by about two BLEU points, which is a very significant improvement, clearly visible to humans; this was confirmed in manual evaluation. We hope that the resulting subtitles corpus, the largest collected so far (about 82 million words), will facilitate research in spoken language SMT.
A Centralized System Approach to Indoor Navigation for the Visually Impaired
Authors: Alauddin Yousif Al-Omary, Hussain M. Al-Rizzo and Haider M. AlSabagh
People who are Blind or Visually Impaired (BVI) have one goal in common: to navigate through unfamiliar indoor environments without the intervention of a human guide. The number of blind people in the world is not accurately known at present; however, based on the 2010 Global Data on Visual Impairments of the World Health Organization, approximately 285 million people are estimated to be visually impaired worldwide: 39 million are blind and 246 million have low vision, 90% of them in developing countries, with 82% of blind people aged 50 and above. Available extrapolated statistics about blindness in some countries in the Middle East show ∼102,618 in Iraq, ∼5,358 in the Gaza Strip, ∼22,692 in Jordan, ∼9,129 in Kuwait, ∼104,321 in Saudi Arabia, and ∼10,207 in the United Arab Emirates. These statistics reveal the importance of developing a useful, accurate, and easy-to-use navigation system to help this large population of disabled people in their everyday lives. Various commercial products are available to navigate BVI people in outdoor environments based on the Global Positioning System (GPS), where the receiver must have a clear view of the sky. Indoor geo-location, on the other hand, is much more challenging because objects surrounding the user can block or interfere with the GPS signal.
In this paper, we present a centralized wireless indoor navigation system to aid the BVI. A centralized approach is adopted because of the lack of research in this area. Some proposed navigation systems require users to inconveniently carry heavy navigation devices; some require administrators to install a complex network of sensors throughout a building; and others are simply impractical. The system consists of four major components: 1) a Wireless Positioning Subsystem, 2) a Visual Indoor Modeling Interface, 3) a Guidance and Navigation Subsystem, and 4) a Path-Finding Subsystem. The system is designed not only to accurately locate, track, and navigate the user, but also to find the safest travel path and to communicate easily with the BVI.
A significant part of the navigation system is the virtual modeling of the building and the design of the path-finding algorithms, which are the main focus of this research. Ultimately, the proposed system provides the design and building blocks for a fully functional package that can be used to build a complete centralized indoor navigation system, from creating the virtual models for buildings to tracking and interacting with BVI users over the network.
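The abstract does not commit to a specific path-finding algorithm; as one possible illustration, a safest-path search could weight a building graph by hazard-adjusted traversal costs and run a standard shortest-path computation (the node names and weights below are hypothetical):

```python
import heapq

def safest_path(graph, start, goal):
    """Dijkstra over a building graph whose edge weights encode traversal
    cost (distance inflated by hazard penalties, e.g. for stairs).
    graph: {node: [(neighbor, cost), ...]}"""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(heap, (nd, nxt))
    # reconstruct the path from goal back to start
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path))

# Example: a stairwell is penalized as less safe for BVI users
building = {
    "entrance": [("hall", 5.0)],
    "hall": [("stairs", 3.0), ("elevator", 6.0)],
    "stairs": [("lab", 12.0)],      # higher cost: hazardous route
    "elevator": [("lab", 4.0)],
}
print(safest_path(building, "entrance", "lab"))
```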
Evaluation of Big Data Privacy and Accuracy Issues
Authors: Reem Bashir and Abdelhamid Abdelhadi Mansor
Nowadays massive amounts of data are stored, and the data itself typically contains a lot of non-trivial but useful information. Data mining techniques can be used to discover this information, which can help companies with decision-making. However, in real-life applications, data is massive and stored over distributed sites. One of our major research topics is protecting privacy over this kind of data. Previously, the important characteristics, issues and challenges related to the management of large amounts of data have been explored. Various open source data analytics frameworks that deal with large data analytics workloads have been discussed, and a comparative study of these frameworks and their suitability has been proposed. The digital universe is flooded with huge amounts of data generated by users worldwide. These data are diverse in nature and come from various sources in many forms. To keep up with the desire to store and analyze ever larger volumes of complex data, relational database vendors have delivered specialized analytical platforms that come in many shapes and sizes, from software-only products to analytical services that run in third-party hosted environments. In addition, new technologies have emerged to address exploding volumes of complex data, including web traffic, social media content and machine-generated data such as sensor data and global positioning system data.
Big data is defined as data of such volume that new technologies and architectures are required to extract value from it through capture and analysis. Due to the large size of the data, it becomes very difficult to perform effective analysis using existing traditional techniques. Big data has become a prominent research field, especially when it comes to decision making and data analysis. However, big data, due to its various properties such as volume, velocity, variety, variability, value and complexity, poses many challenges. Since big data is a recent, upcoming technology in the market which can bring huge benefits to business organizations, it is necessary that the various challenges and issues associated with adopting this technology are brought to light. Another challenge is that data collection may not be accurate enough, which leads to inconsistent analysis that can critically affect the decisions based on it. Moreover, it is clearly apparent that organizations need to employ data-driven decision making to gain competitive advantage. Processing, integrating and interacting with more data should make it better data, providing both more panoramic and more granular views to aid strategic decision making. This is made possible by big data exploiting affordable and usable computational and storage resources. Many offerings are based on the MapReduce and Hadoop paradigms, and most focus solely on the analytical side. Nonetheless, in many respects it remains unclear what big data actually is; current offerings appear as isolated silos that are difficult to integrate and/or make it difficult to better utilize existing data and systems. Data is growing at such a speed that it becomes difficult to handle such large amounts (exabytes), mainly because the volume is increasing rapidly in comparison to the available computing resources. The term big data, as used nowadays, is something of a misnomer, as it points out only the size of the data and pays little attention to its other properties.
If data is to be used to make accurate and timely decisions, it must be available in an accurate, complete and timely manner. This makes the data management and governance process somewhat complex, adding the necessity to make data open and available to government agencies in a standardized manner, with standardized APIs, metadata and formats, thus leading to better decision making, business intelligence and productivity improvements.
This paper presents a discussion and evaluation of the most prominent techniques used in the processes of data collection and analysis in order to identify the privacy defects in them that affect the accuracy of big data. Based on the results of this analysis, recommendations are provided for improving data collection and analysis techniques that will help to avoid most, if not all, of the problems facing the use of big data in decision making. Keywords: Big Data, Big Data Challenges, Big Data Accuracy, Big Data Collection, Big Data Analytics.
Video Demo of LiveAR: Real-Time Human Action Recognition over Live Video Streams
By Yin Yang
We propose to present a video demonstration of LiveAR at the ARC'16 conference. For this purpose, we have prepared three demo videos, which can be found in the submission files. These video demos show the effectiveness and efficiency of LiveAR running on video streams containing a diverse set of human actions. Additionally, the demo exhibits important system performance parameters such as latency and resource usage.
LiveAR is a novel system for recognizing human actions, such as running and fighting, in a video stream in real time, backed by a massively-parallel processing (MPP) platform. Although action recognition is a well-studied topic in computer vision, so far most attention has been devoted to improving accuracy rather than efficiency. To our knowledge, LiveAR is the first system to achieve real-time efficiency in action recognition, which can be a key enabler in many important applications, e.g., video surveillance and monitoring of critical infrastructure such as water reservoirs. LiveAR is based on a state-of-the-art method for offline action recognition which obtains high accuracy; its main innovation is to adapt this base solution to run on an elastic MPP platform to achieve real-time speed at an affordable cost.
The main objectives in the design of LiveAR are to (i) minimize redundant computations, (ii) reduce communication costs between nodes in the cloud, (iii) allow a high degree of parallelism and (iv) enable dynamic node additions and removals to match the current workload. LiveAR is based on an enhanced version of Apache Storm. Each video manipulation operation is implemented as a bolt (i.e., logical operator) executed by multiple nodes, while input frames arrive at the system via a spout (i.e., streaming source). The output of the system is presented on screen using FFmpeg.
Next we briefly explain the main operations in LiveAR. The dense point extraction bolt is a first step for video processing, which has two input streams: the input video frame and the current trajectories. The output of this operator consists of dense points sampled in the video frame that are not already on any of the current trajectories. In particular, LiveAR partitions the frame into different regions, and assigns one region to a dense point evaluator, each running in a separate thread. Then, the sampled coordinates are grouped according to the partitioning, and routed to the corresponding dense point evaluator. Meanwhile, coordinates on current trajectories are similarly grouped by a point dispatcher, and routed accordingly. Such partitioning and routing minimizes network transmissions as each node is only fed the pixels and trajectory points it needs.
The optic flow generation operator is executed by multiple nodes in parallel, similarly to the dense point extractor. An additional challenge here is that the generation of optic flows involves (i) comparing two frames at consecutive time instances and (ii) using multiple pixels to determine the value of the flow at each coordinate. Point (i) means that the operator is stateful, i.e., each node must store the previous frame and compare it with the current one. Hence, node additions and removals (necessary for elasticity) become non-trivial, as a new node does not immediately possess the states (i.e., pixels of the previous frame) necessary to work on its inputs. Regarding (ii), each node cannot simply handle a region of the frame, as is the case in the dense point extractor, because the computation at one coordinate relies on the surrounding pixels. Our solution in LiveAR is to split the frame into overlapping patches; each patch contains a partition of the frame as well as the pixels surrounding the partition. This design effectively reduces the amount of network transmissions, thus improving system scalability.
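The overlapping-patch idea can be sketched as follows; the patch grid and halo size are illustrative parameters, not the values used in LiveAR:

```python
import numpy as np

def overlapping_patches(frame, rows, cols, halo):
    """Split a frame (H x W array) into rows*cols patches, each extended by
    'halo' pixels on every side so that per-pixel optic flow can be computed
    locally without fetching neighbouring pixels from other nodes."""
    h, w = frame.shape[:2]
    ph, pw = h // rows, w // cols
    patches = []
    for r in range(rows):
        for c in range(cols):
            y0 = max(0, r * ph - halo)
            y1 = min(h, (r + 1) * ph + halo)
            x0 = max(0, c * pw - halo)
            x1 = min(w, (c + 1) * pw + halo)
            patches.append(((r, c), frame[y0:y1, x0:x1]))
    return patches

# Example: a 480x640 grey frame split into a 2x2 grid with a 16-pixel halo
frame = np.zeros((480, 640), dtype=np.uint8)
for key, patch in overlapping_patches(frame, 2, 2, 16):
    print(key, patch.shape)
```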
Lastly, the trajectory tracking operator involves three inputs: the current trajectories, the dense points detected from the input frame, and the optic flows of the input frame. The main idea of this operator is to “grow” a trajectory, either an existing one or a new one starting at a dense point, by adding one more coordinate computed from the optic flow. Note that it is possible that the optic flow indicates that there is no more coordinate on this trajectory in the input frame, ending the trajectory. The parallelization of this operator is similar to that of the dense point extractor, except that each node is assigned trajectories rather than pixels and coordinates. Grouping of the trajectories is performed according to their last coordinates (or the newly identified dense points for new trajectories).
FPGA Based Image Processing Algorithms (Digital Image Enhancement Techniques) Using Xilinx System Generator
FPGAs have many significant features that make them a suitable platform for processing real-time algorithms. They give substantially higher performance than programmable Digital Signal Processors (DSPs) and microprocessors. At present, the use of FPGAs in the research and development of applied digital systems for specific tasks is increasing. This is due to the advantages FPGAs have over other programmable devices: high clock frequency, high operations per second, code portability, code library reusability, low cost, parallel processing, the capability of interfacing with high- or low-level interfaces, security and Intellectual Property (IP) protection.
This paper presents the concept of hardware digital image processing algorithms using a field programmable gate array (FPGA). It focuses on implementing an efficient architecture for image processing algorithms such as image enhancement (point processing techniques) using the fewest possible System Generator blocks. In this paper, the modern approach of Xilinx System Generator (XSG) is used for system modeling and FPGA programming. Xilinx System Generator is a MATLAB-based tool that generates the bitstream file (*.bit), netlist, and timing and power analysis. The performance of these architectures is evaluated by implementing them on the XUPV5-LX110T FPGA board.
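For reference, the kind of point-processing (pixel-wise) enhancement operations that such an architecture implements in hardware can be expressed in a few lines of software; the sketch below is a software analogue only, not the XSG block design:

```python
import numpy as np

def image_negative(img):
    """Point operation: invert an 8-bit grayscale image."""
    return 255 - img

def contrast_stretch(img):
    """Point operation: linearly stretch intensities to the full 0-255 range."""
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:
        return img.copy()
    return ((img.astype(np.float32) - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def threshold(img, t=128):
    """Point operation: binarize the image at threshold t."""
    return np.where(img >= t, 255, 0).astype(np.uint8)

# Example on a synthetic low-contrast gradient image
img = np.tile(np.linspace(80, 160, 64, dtype=np.uint8), (64, 1))
print(image_negative(img).min(), contrast_stretch(img).max(), threshold(img).mean())
```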
Alice-based Computing Curriculum for Middle Schools
Authors: Saquib Razak, Huda Gedawy, Don Slater and Wanda Dann
Alice is visualization software for introducing computational thinking and programming concepts in the context of creating 3D animations. Our research aims to introduce computational thinking and problem solving skills in middle schools in Qatar. To make this aim accessible, we have adapted the Alice software for a conservative Middle Eastern culture, developed curricular materials, and provided professional development workshops for teachers and students in the Middle East. There is a trend for countries to evaluate curricula from other cultures and then try to bring successful curricula to their own school systems. This trend is a result of societies beginning to realize the importance of education and knowledge. Qatar's efforts towards building a knowledge-based society and upgrading its higher education infrastructure are proof of this realization. The challenge is to recognize that although a strong curriculum is necessary, simply porting a successful curriculum to a different environment is not sufficient to guarantee success. Here we share our attempt to take a tool and associated curriculum that have been very successful in several countries in the West and apply them in an environment with very different cultures and social values.
The Alice ME project is targeted at middle school (grades 6–8) teachers and students in the Middle East. The overall goal of the project is to adapt the Alice 2 software to the local cultures, develop new instructional materials appropriate for local systems, and test the effectiveness of the Alice approach at the middle school level. The “Alice approach” – using program visualization to teach/learn analytic, logical, and computational thinking, problem solving skills and fundamental programming concepts in the context of animation – remains the same.
In the formative phase of this project, our goal was to understand the environment and local culture and to evaluate the opportunities and challenges. The lessons learned in this phase are being used to formulate the future direction of our research. Although Middle Eastern countries are rapidly modernizing, the desire to maintain traditional norms is strong. For this reason, we compiled two lists of models. One list contained existing models in the Alice gallery that are not appropriate for Middle Eastern culture; Qatar (and the Middle East in general) is a religious society that follows conservative norms in dress for both men and women. The second was a list of models that would be interesting and relevant to Qatari society. These two lists helped us determine which models might be modified, removed, or added to the gallery. We found that Qatar is a cultural society with a lot of emphasis on local customs. Local hospitality, religion, and traditional professions like pearl diving, fishing, and police work have a special place in society. We also discovered that people in general, and boys in particular, have a special respect for camels and the desert.
We created the curriculum in collaboration with one private school and the Supreme Education Council, since creating artifacts in isolation and expecting the educational systems to adopt them is not a prudent approach. Through this collaboration, we learned that the majority of the existing ICT and computing curriculum is based on step-by-step instructions that students are expected to follow and reproduce; there is a lack of emphasis on student learning, creativity, and application.
Most ICT teachers in Qatar, both in public and private schools, are trained ICT professionals. At the same time, most of these teachers are not familiar with program visualization tools such as Alice and have not taught fundamental programming concepts in their classes. As a result, the need for professional development workshops is urgent. We have conducted several workshops for teachers to help them use Alice during the pilot study of the curriculum. During these workshops, we focus on two main concepts – learning to use Alice as a tool, and learning to teach computational thinking using Alice.
We have piloted the curriculum, instructional materials, and the new 3-D models in the Alice gallery for middle school students in one private English school and two public Arabic schools. The pilot test involved more than 400 students in the three schools combined. During the pilot testing, we conducted a survey to obtain initial feedback regarding the 3D models from the Middle East gallery (students have access to all models that are part of Alice's core gallery). Through these surveys, we learned that models of objects that students use in everyday life were the most popular when it came to using models in Alice.
As part of the curriculum to teach computational thinking using Alice as a tool, we have created several artifacts which are made available to the schools. These items include:
Academic plan for the year
Learning outcomes for each lecture, PowerPoint presentations, class exercises, and assessment questions
Student textbook – One English book for grade 8, one Arabic textbook for grade 8 and one Arabic textbook for grade 11.
One of the most important skills essential for building the future generation is critical thinking. Although we are currently only looking at the acceptability of the newly created 3-D models and the usability of our curriculum and instructional material, we are also curious about the effectiveness of this curriculum in teaching computational thinking. We analyzed the results of an exam conducted at a local school and observed that students in grade 7 following the Alice-based curriculum performed better than those in grade 9 on the same exam. This exam was designed to measure critical thinking skills in problem solving without any reference to Alice. We hope that this result is directly related to the students' experience with Alice, as it makes students think about a problem from different perspectives. We acknowledge that more formal work still needs to be done in order to support our hypothesis.
This academic year (2015–2016), the Alice-based curriculum is being used in Arabic in six independent schools and in English in four private English schools. More than 1400 students are currently studying this content.
Towards a K-12 Game-based Educational Platform with Automatic Student Monitoring: “INTELLIFUN”
Authors: Aiman Erbad, Sarah Malaeb and Jihad Ja'am
Since the start of the twenty-first century, digital technologies have increasingly supported teaching and learning activities. Because learning is most effective when it starts early, advanced early-years educational tools are highly recommended to help new generations gain the skills they need to build opportunities and progress in life. Despite all the advances in digital learning, there are still many problems that teachers, students and parents face. Students' learning motivation and problem-solving ability remain weak, while working memory capacity is found to be low for children under 11 years old, which causes learning difficulties such as developmental coordination disorder, difficulties with mathematics calculation, and language impairments. The latest PISA (Programme for International Student Assessment) results show that Qatar has among the lowest scores in mathematics, science and reading performance compared to other countries in similar conditions and is ranked 63rd of the 65 countries involved, even though Qatari GDP and general government expenditure are high (OECD 2012).
Another problem affecting the educational experience of young children is family engagement. Parents need to be more involved in the learning process and to have quick, timely and detailed feedback about their children's progress in different topics of study. In fact, school days are limited, and parents can play an important role in improving their children's progress in learning and understanding concepts. Traditional assessment tools usually provide global grading by topic of study (e.g., algebra). Parents need a grading system by learning skill (e.g., addition facts to 10, solving missing-number problems, subtraction of money) to have a clear view of the specific skills their children need to improve. Finally, teachers also need an automated skills-based student monitoring tool to observe students' progress against the learning objectives, so that they can focus on personalized tutoring tactics and take accurate decisions. Such a tool allows teachers to focus more on students' weaknesses and take the necessary actions to overcome these problems.
Recent studies have shown that students become more motivated to learn with game-based learning tools. Interactive elements facilitate problem solving, make learning new concepts easier, and encourage students to work harder at school and at home. Active learning using a game-based model promotes long-term retention of information, which helps students increase their exam scores while acquiring the needed skills. We conducted a survey and analyzed the features of 31 leading technologies in the digital learning industry. We found that only 21 of them offer educational games, 22 are dedicated to the elementary age range, 15 offer digital resources to support mathematics and science, 11 include digital assessment tools to test children's skills, and 6 include an automated progress-reporting engine, most of which require manual data entry by teachers. There is a need for a complete game-based learning platform with automatic performance and progress reporting that requires no manual intervention and, in particular, is customized to fit elementary school curriculum standards.
We developed an educational platform called ‘IntelliFun’ that uses educational games to automatically monitor and assess the progress of elementary school children. It can be applied to a wider scope of courses that use outcome-based learning through games. Our intelligent game-based ‘IntelliFun’ platform provides a potential solution to many serious issues in education: its entertaining gaming features improve students' learning motivation, problem-solving ability and working memory capacity, while its student performance monitoring features strengthen family engagement. Having these features integrated in one technology makes ‘IntelliFun’ a novel solution in digital education.
To generate students' outcomes while they play the games, we need an effective technology to relate the curriculum standards and learning objectives to the game worlds' content (i.e., scenes and activities). We use an ontology-based approach: we designed a new ontology model that maps program curricula and learning objectives to the flow-driven game worlds' elements. Children's performance is evaluated through the ontology using information extraction with an automated reasoning mechanism guided by a set of inference rules. Technically, using ontologies in the field of education and games is very challenging, and our data model offers a novel solution to two issues:
• The complexity of designing educational data models where learning objectives and curriculum standards are matched and incorporated in serious games, and
• The complexity of providing advanced reasoning over the data.
This allows the fusion of many challenging technologies: digital education, semantic web, games, monitoring systems and artificial intelligence.
Our work is deeply rooted in the state of the art in educational games and digital education systems. The curriculum ontology was inspired by the British Curriculum Ontology (BBC 2013). The instances related to learning objectives are extracted from the elementary curriculum standards of the Supreme Education Council of Qatar. The ontology model for games follows the story-based scenarios described in Procedural Content Generation (Hartsook 2011) and HoloRena (Juracz 2010). We used the trajectory trace ontology described in STSIM (Corral 2014) to design the student monitoring ontology. To evaluate a student's performance, we use an inference-rule-based reasoning engine to query correct, incorrect and incomplete actions performed by the player, as described in Ontology-based Information Extraction (Gutierrez 2013). To measure the learner's performance, key indicators feed the reasoning engine, which executes the appropriate calculation methods.
The platform is implemented in a 3-tier architecture with mobile game applications at the front end. The games can query and update the ontology in real time through a web service by invoking data management, reasoning, monitoring and reporting operations using the Apache Jena Ontology API. The platform can dynamically generate game content based on children's preferences and acquired knowledge. Its monitoring features allow teachers to track each child's achievement of every learning objective and also strengthen parents' engagement in their children's learning experience, as parents can follow up on their children and know their weaknesses and strengths. ‘IntelliFun’ is used to improve children's learning outcomes and keep them motivated while playing games.
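The platform itself relies on the Apache Jena Ontology API (Java); purely as a rough illustration of the kind of skills-based reporting query such a service might run over the gameplay-trace ontology, the sketch below uses Python's rdflib instead, and all namespace, class and property names (ex:performedBy, ex:targetsObjective, ex:hasStatus, student_042) are made up for the example.

```python
# Hypothetical sketch: count correct/incorrect/incomplete actions per learning
# objective for one student, the kind of query a reporting operation could issue.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/intellifun#")   # hypothetical namespace
g = Graph()

# A few hypothetical gameplay-trace triples (normally produced by the game engine).
for i, (objective, status) in enumerate([("AdditionTo10", "Correct"),
                                         ("AdditionTo10", "Incorrect"),
                                         ("MissingNumber", "Correct")]):
    action = EX[f"action_{i}"]
    g.add((action, RDF.type, EX.Action))
    g.add((action, EX.performedBy, EX.student_042))
    g.add((action, EX.targetsObjective, EX[objective]))
    g.add((action, EX.hasStatus, EX[status]))

query = """
PREFIX ex: <http://example.org/intellifun#>
SELECT ?objective ?status (COUNT(?action) AS ?n)
WHERE {
    ?action ex:performedBy      ex:student_042 ;
            ex:targetsObjective ?objective ;
            ex:hasStatus        ?status .
}
GROUP BY ?objective ?status
"""

for objective, status, n in g.query(query):
    print(f"{objective.split('#')[-1]:<15} {status.split('#')[-1]:<12} {int(n)}")
```

In the real platform the equivalent query would be issued through the web service and combined with the inference rules that classify each action, but the grouping-by-objective idea is the same.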
We aim to start testing our platform with real users, using the grade 1 mathematics curriculum as a case study. Our user study will include students, parents and teachers who will answer an evaluation questionnaire after testing the technology. This will help us evaluate the efficacy of the platform and ascertain its benefits by analyzing its impact on students' learning experience. An interesting direction for future work is the use of data mining techniques in the reasoning engine to evaluate students' performance with more complex key performance indicators. We can also consider dynamic generation of game worlds' content based on students' preferences and acquired learning skills.
-
-
-
Enhancing Information Security Process in Organisations in Qatar
Due to the universal use of technology and its pervasive connection to the world, organisations have become exposed to more frequent and varied threats (Rotvold, 2008). Therefore, organisations today are giving more attention to information security, which has become a vital and challenging issue. Mackay (2013) noted that the significance of information security, particularly information security policies and awareness, is growing due to the increasing use of IT and computerization. Accordingly, information security plays a key role in the internet era. Gordon & Loeb (2006) stated that information security involves a group of actions intended to protect information and information systems. It involves software, hardware, physical security and human factors, where each element has its own features. Information security secures not only the organisation's information but the complete infrastructure that enables its use. Organisations face an increase in daily security breaches, and as information becomes more accessible to the public the threat becomes greater. Therefore, security requirements need to be tightened.
Information security policies control employees' behavior as well as securing the use of hardware and software. Organisations benefit from implementing information security policies as they help them classify their information assets and define the importance of those assets to the organisation (Canavan, 2003). An information security policy is a set of principles, regulations, methodologies, procedures and tools created to protect the organisation from threats. Boss and Kirsch (2007) stated that employees' compliance with information security policies has become an important socio-organizational resource. Information security policies are applied in organisations to provide employees with guidelines that guarantee information security.
Herold (2010) expressed the importance for organisations of having constant training programmes and educational awareness to attain the required result from the implementation of an information security policy. Security experts emphasise the importance of security awareness programmes and how they improve information security as a whole. Nevertheless, implementing security awareness in organisations is a challenging process, as it requires actively interacting with an audience that usually does not appreciate the importance of information security (Manke, 2013). Organisations tend to use advanced security technologies and constantly train their security professionals, while paying little attention to enhancing the security awareness of employees and users. This makes employees and users the weakest link in any organisation (Warkentin & Willison, 2009).
In the last ten years, the State of Qatar has witnessed remarkable growth and development, having embraced information technology as a base for innovation and success. The country has seen tremendous improvement in the sectors of health care, education and transport (Al-Malki, 2015). Information technology plays a strategic role in building the country's knowledge-based economy. Due to the country's increasing use of the internet and its connection to the global environment, Qatar needs to adequately address the global threats arising from the internet. Qatar's role in world politics means it faces not only traditional threats from hackers, but also more malicious actors such as terrorists, organized criminal networks and foreign government spying. Qatar has faced considerable discomfort with some countries which try to breach the country's security. The Qatar Computer Emergency Response Team (Q-CERT), which is responsible for addressing the state's information security needs, stated: “As Qatar's dependence on cyberspace grows, its resiliency and security become even more critical, and hence the needs for a comprehensive approach that addresses this need” (Q-CERT, 2015). Therefore, Q-CERT established the National Information Assurance (NIA) policy, an information security policy designed to help both government and private sectors in Qatar protect their information and enhance their security. Nevertheless, the NIA policy has still not been implemented in any organisation in Qatar. This is due to the barriers and challenges of information security in Qatar, such as culture and awareness, which make the implementation of information security policies a challenging undertaking.
As a result, the scope of this research is to investigate information security in Qatar. There are many solutions for information security; some are technical and others are non-technical, such as security policies and information security awareness. This research focuses on enhancing information security through non-technical solutions, in particular information security policy. The aim of this research is to enhance information security in organisations in Qatar by developing a comprehensive Information Security Management System (ISMS) that considers the country-specific and cultural factors of Qatar. An ISMS is a combination of policies and frameworks which ensure information security management (Rouse, 2011). This information security management approach is unique to Qatar as it considers Qatari culture and country-specific factors. Although many international information security standards are available, such as ISO 27001, this research shows that they do not always address the security needs particular to the culture of the country. Therefore, there is a need to define a unique ISMS approach for Qatar.
To accomplish the aim of this research the following objectives must be achieved.
1. To review literature on information security in general and in Qatar in particular.
2. To review international and local information security standards and policies.
3. To explore the NIA policy in Qatar and compare it with others in the region and internationally.
4. To define problems with implementing information security policies and NIA policy in particular in organisations in Qatar.
5. To provide recommendations for the new version of the NIA policy.
6. To assess the awareness of employees on information security.
7. To assess the information security process in organisations in Qatar.
8. To identify the factors which affect information security in Qatar including culture and country specific factors.
9. To propose an ISMS for Qatari organisations taking into consideration the above factors.
10. To define a process for organisations to maintain the ISMS.
11. To evaluate the effectiveness of the proposed ISMS.
To achieve the aim of this research, different research methodologies, strategies and data collection methods will be used, such as literature review, surveys, interviews and case study. The research comprises three phases. The researcher has currently completed phase one, which analyses the field of information security and highlights the gaps in the literature that can be investigated further in this research. It also examines the country factors that affect information security and the implementation of information security policies, drawing on interviews with experts in the fields of information technology, information security, culture and law to identify the situation of information security in Qatar and the factors which might affect its development, including cultural, legal and political issues. In the following two years, the researcher will complete phases two and three. During phase two, the researcher will measure the awareness of employees and their knowledge of information security and of information security policies in particular. The findings will help the researcher complete phase three, which involves investigating the NIA policy further, carrying out a real implementation of an ISMS in an organisation in Qatar, and analysing the main findings to finally provide recommendations for improving the NIA policy.
In conclusion, the main contribution of this research is to investigate the NIA policy and the challenges facing its implementation, and then define an ISMS process for the policy to assist organisations in Qatar in implementing and maintaining it. The research is valuable since it will perform the first real implementation of the NIA policy in an organisation in Qatar, taking advantage of the internship the researcher had with ICT. The research will move the policy from paper-based form into a working ISMS and oversee its operation in practice in one of these organisations.
Keywords: Information security, National Information Assurance policy, Information Security Management System, Security Awareness, Information Systems
References
Rotvold, G. (2008). How to Create a Security Culture in Your Organization. Available: http://content.arma.org/IMM/NovDec2008/How_to_Create_a_Security_Culture.aspx. Last accessed 1st Aug 2015.
Manke, S. (2013). The Habits of Highly Successful Security Awareness Programs: A Cross-Company Comparison. Available: http://www.securementem.com/wp-content/uploads/2013/07/Habits_white_paper.pdf. Last accessed 1st Aug 2015.
Al-Malki. (2015). Welcome to Doha, the pearl of the Gulf. Available: http://www.itma-congress-2015.com/Welcome_note_2.html. Last accessed 4th May 2015.
Rouse, M. (2011). Information security management system (ISMS). Available: http://searchsecurity.techtarget.in/definition/information-security-management-system-ISMS. Last accessed 22nd Aug 2015.
Q-CERT. (2015). About Q-CERT. Available: http://www.qcert.org/about-q-cert. Last accessed 1st Aug 2015.
Warkentin, M., and Willison, R. (2009). “Behavioral and Policy Issues in Information Systems Security: The Insider Threat,” European Journal of Information Systems, 18(2), pp. 101–105.
Mackay, M. (2013). An Effective Method for Information Security Awareness Raising Initiatives. International Journal of Computer Science & Information Technology, 5(2), pp. 63–71.
Gordon, L. A. & Loeb, M. P. (2006). Budgeting Process for Information Security Expenditures. Communications of the ACM, 49(1), pp. 121–125.
Herold, R. (2010). Managing an Information Security and Privacy Awareness and Training Program. New York: CRC Press.
Boss, S., & Kirsch, L. (2007). The Last Line of Defense: Motivating Employees to Follow Corporate Security Guidelines. International Conference on Information Systems, pp. 9–12.
Canavan, S. (2003). An Information Security Policy Development Guide for Large Companies. SANS Institute.
-
-
-
Visible Light Communication for Intelligent Transport Systems
Authors: Xiaopeng Zhong and Amine Bermak
Introduction
Road safety is a worldwide health challenge that is of great importance in Qatar. According to the WHO [1], global traffic fatalities and injuries number in the millions per year. Qatar has one of the world's highest rates of traffic fatalities, which cause more deaths than common diseases [2]. Traffic congestion and vehicle fuel utilization are two other major problems. Integrating vehicle communication into intelligent transport systems (ITS) is important, as it will help improve road safety, efficiency and comfort by enabling a wide variety of transport applications. Radio frequency communication (RFC) technologies do not meet the stringent transport requirements due to spectrum scarcity, high interference and lack of security [3]. In this work, we propose an efficient and low-cost visible light communication (VLC) system based on CMOS transceivers for vehicle-to-vehicle (V2V) and infrastructure-to-vehicle (I2V) communication in ITS, as a complementary platform to RFC.
Objective
The proposed VLC system is designed to be low-cost and efficient, supporting various V2V and I2V communication scenarios as shown in Fig. 1. The VLC LED transmitters (Tx) are responsible for both illumination and information broadcasting. They are designed to work with various existing transport infrastructures (such as street lamps, guideboards and traffic lights) as well as vehicle lights, with low cost and complexity. The receivers (Rx) will be available on both the front and back of vehicles, with both vision and communication capabilities. The added vision capability enhances the robustness of communication.
System implementation
The VLC system implementation in Fig. 2 is an optimized joint design of the transmitter, receiver and communication protocol. The LED transmitter design focuses on an LED driver that efficiently combines illumination with communication modulation schemes. A light sensor is integrated to provide adaptive feedback for better power efficiency. Polarization techniques are utilized to cancel background light, which not only enhances image quality but also improves the robustness of VLC, as shown in Fig. 3(a) [4]. A polarization image sensor using a liquid crystal micro-polarimeter array has been designed, as illustrated in Fig. 3(b). The CMOS visible light receiver will be based on a traditional CMOS image sensor but with an innovative architecture specifically for V2V and I2V VLC. It features dual readout channels: a compressive channel for image capture and a high-speed channel for VLC. Novel detection and tracking algorithms are used to improve communication speed, reliability and security. Compressive sensing is applied to image capture; the compression is facilitated by a novel analog-to-information conversion (AIC) scheme which leads to significant power savings in image capture and processing. A prototype AIC-based image sensor has been successfully implemented, as shown in Fig. 4 [5]. A VLC protocol is specifically tailored for V2V and I2V communication based on the custom transceivers: the PHY layer is designed around MIMO OFDM and the MAC layer around dynamic link adaptation. The protocol is intended as an extension and optimization of the IEEE 802.15.7 standard for V2V and I2V VLC. A preliminary prototype VLC system has been designed to verify feasibility, and a kbps-level VLC channel has been achieved under illumination levels from tens to hundreds of lux. It is anticipated that further improvements will be obtained with continued research using the techniques described above.
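As a purely illustrative aside on the intensity-modulation principle an LED link relies on (and not the MIMO-OFDM PHY described above), the following sketch encodes a bit stream with Manchester-coded on-off keying, the flicker-mitigating line code used in IEEE 802.15.7 PHY I, and recovers it with a simple half-symbol comparison; the noise level and bit count are arbitrary assumptions.

```python
# Minimal sketch of Manchester-coded OOK over an LED/photodiode link.
# The Manchester code keeps average LED brightness constant (flicker mitigation).
import numpy as np

rng = np.random.default_rng(0)

def manchester_encode(bits):
    # bit 1 -> high-low, bit 0 -> low-high (one common convention)
    return np.array([[1, 0] if b else [0, 1] for b in bits]).ravel()

def manchester_decode(chips):
    pairs = chips.reshape(-1, 2)
    return (pairs[:, 0] > pairs[:, 1]).astype(int)  # compare half-symbol energies

bits = rng.integers(0, 2, 64)
tx = manchester_encode(bits).astype(float)          # LED drive levels (0 = off, 1 = on)
rx = tx + 0.2 * rng.standard_normal(tx.size)        # photodiode output with receiver noise
bits_hat = manchester_decode(rx)
print("bit errors:", int(np.sum(bits != bits_hat)))
```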
Conclusion
An efficient, low-cost visible light communication system is proposed for V2V and I2V communication, featuring a power-efficient transmitter design, a dual-readout (imaging and VLC) receiver architecture, fast detection and tracking algorithms with compressive sensing, polarization techniques and a dedicated communication protocol.
References
[1] Global status report on road safety 2013, World Health Organization (WHO).
[2] Sivak, Michael, “Mortality from road crashes in 193 countries”, 2014.
[3] Lu, N.; Cheng, N.; Zhang, N.; Shen, X.S.; Mark, J.W., “Connected Vehicles: Solutions and Challenges,” IEEE Internet of Things Journal, vol. 1, no. 4, pp. 289–299, Aug. 2014.
[4] X. Zhao, A. Bermak, F. Boussaid and V. G. Chigrinov, “Liquid-crystal micropolarimeter array for full Stokes polarization imaging in visible spectrum”, Optics Express, vol. 18, no. 17, pp. 17776–17787, 2010.
[5] Chen, D.G.; Fang Tang; Law, M.-K.; Bermak, A., “A 12 pJ/Pixel Analog-to-Information Converter Based 816 × 640 Pixel CMOS Image Sensor,” IEEE Journal of Solid-State Circuits, vol. 49, no. 5, pp. 1210–1222, 2014.
-
-
-
A General Framework for Designing Sparse FIR MIMO Equalizers Based on Sparse Approximation
Authors: Abubakr Omar Al-Abbasi, Ridha Hamila, Waheed Bajwa and Naofal Al-Dhahir
In broadband communications, the channel delay spread, defined as the duration in time, or in samples, over which the channel impulse response (CIR) has significant energy, can be very long and results in a highly frequency-selective channel frequency response. A long CIR can spread over tens, or even hundreds, of symbol periods and impairs the signals that pass through such channels. For instance, a large delay spread causes inter-symbol interference (ISI) and inter-carrier interference (ICI) in multi-carrier modulation (MCM). Therefore, long finite impulse response (FIR) equalizers have to be implemented at high sampling rates to avoid performance degradation. However, the implementation of such equalizers is prohibitively expensive, as the design complexity of FIR equalizers grows with the square of the number of nonzero taps in the filter. Sparse equalization, where only a few nonzero coefficients are employed, is a widely used technique to reduce complexity at the cost of a tolerable performance loss. Nevertheless, reliably determining the locations of these nonzero coefficients is often very challenging.
In this work, we first propose a general framework that transforms the problem of designing sparse single-input single-output (SISO) and multiple-input multiple-output (MIMO) linear equalizers (LEs) into the problem of sparsest approximation of a vector in different dictionaries. In addition, we compare several choices of sparsifying dictionaries under this framework. Furthermore, the worst-case coherence of these dictionaries, which determines their sparsifying effectiveness, is evaluated analytically and/or numerically. Second, we extend our framework to accommodate SISO and MIMO non-linear decision-feedback equalizers (DFEs). Similar to the sparse FIR LE design problem, the design of sparse FIR DFEs can be cast as sparse approximation of a vector by a fixed dictionary, whose solution can be obtained using either greedy algorithms, such as Orthogonal Matching Pursuit (OMP), or convex-optimization-based approaches, with the former being more desirable due to its low complexity. Third, we further generalize our sparse design framework to the channel-shortening setup. Channel-shortening equalizers (CSEs) are used to ensure that the cascade of a long CIR and the CSE is approximately equivalent to a target impulse response (TIR) with a much shorter delay spread. Channel shortening is essential for communication systems operating over highly dispersive broadband channels with a large channel delay spread. Fourth, as an application of recent practical interest to the power-line communication (PLC) community, we consider channel shortening for the impulse responses of medium-voltage power lines (MV-PLs) with lengths of 10 km and 20 km to reduce the cyclic prefix (CP) overhead in orthogonal frequency-division multiplexing (OFDM) and, hence, improve the data rate accordingly. For all design problems, we propose reduced-complexity sparse FIR SISO and MIMO linear and non-linear equalizers by exploiting the asymptotic equivalence of Toeplitz and circulant matrices, where the matrix factorizations involved in our design analysis can be carried out efficiently using the fast Fourier transform (FFT) and inverse FFT with negligible performance loss as the number of filter taps increases.
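Since the sparse tap locations can be found with greedy algorithms such as OMP, the standalone sketch below shows a textbook OMP routine on a synthetic dictionary and target vector; it only illustrates the greedy selection mechanism, and the dimensions, stopping rule and random data are assumptions rather than the paper's FFT-accelerated design.

```python
# Textbook Orthogonal Matching Pursuit: greedily pick "active tap" positions so that
# a few columns of the dictionary A approximate the target vector b.
import numpy as np

def omp(A, b, max_taps, tol=1e-6):
    residual = b.copy()
    support = []
    x = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(max_taps):
        # choose the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the selected support, then update the residual
        coeffs, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coeffs
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coeffs
    return x, support

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 128))            # sparsifying dictionary (synthetic)
x_true = np.zeros(128); x_true[[5, 40, 97]] = [1.0, -0.7, 0.4]
b = A @ x_true
x_hat, taps = omp(A, b, max_taps=5)
print("selected tap locations:", sorted(taps))
```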
Finally, the simulation results show that allowing a small performance loss yields a significant reduction in the number of active filter taps for all proposed LE and DFE designs, which in turn results in substantial complexity reductions. The simulation results also show that the CIRs of MV-PLs with lengths of 10 km and 20 km can be shortened to fit within the broadband PLC standards. Additionally, our simulations validate that the sparsifying dictionary with the smallest worst-case coherence results in the sparsest FIR filter design. Furthermore, the numerical results demonstrate the superiority of our proposed approach compared to conventional sparse FIR filters in terms of both performance and computational complexity. Acknowledgment: This work was made possible by grant number NPRP 06-070-2-024 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
-
-
-
Novel Vehicle Awareness Measure for Secure Road Traffic Safety Applications
Authors: Muhammad Awais Javed and Elyes Ben Hamida
Future intelligent transport systems (ITS) are envisaged to offer drivers a safer and more comfortable driving experience through wireless data exchange between vehicles. A number of applications can be realized with the increased vehicle vision and awareness provided by this technology, known as Vehicular Ad hoc Networks (VANETs). These applications include cooperative awareness, warning notification, safe lane change and intersection crossing, intelligent route selection, traffic management, parking selection, multi-player games and internet browsing.
The success of VANETs and their proposed applications depends on secure and reliable message transmission between vehicles. Every vehicle broadcasts periodic safety messages to the neighborhood traffic to announce its presence. These safety messages contain the vehicle's mobility information, including its location, speed, heading and so on. Based on these safety messages, vehicles build a local dynamic map (LDM) that provides a complete description of the surrounding traffic. Using the LDM, vehicles can look beyond their line of sight and make safe and intelligent driving decisions.
An increased level of vehicle safety awareness is the primary goal of road safety applications. An accurate measure of this awareness is critical to evaluate the impact of different parameters, such as security overhead and vehicle density, on vehicle safety and application quality of service. A precise metric for vehicle safety awareness should take into account both the vehicle's knowledge of its surroundings and the accuracy of the information received in cooperative awareness messages (CAMs) and stored in the LDM. Existing metrics in the literature use purely quantitative measures of awareness, such as packet delivery ratio, and do not consider the accuracy and fidelity of the information in the LDM. Due to GPS error and outdated information in the LDM, vehicles can have a reduced level of awareness, resulting in the dissemination of false positives and false negatives that can badly impact road safety applications.
In this paper, we propose two novel metrics for evaluating vehicle safety awareness. Both metrics start by applying our proposed vehicle-heading-based filtering mechanism so that only the critical neighbors in the surroundings (i.e., those that are moving towards a vehicle and could collide with it) are considered when calculating awareness. The first metric, the Normalized Error based Safety Awareness Level (SAL), calculates awareness from the number of neighbors a vehicle has successfully discovered in its LDM and a normalized distance error computed from the actual position of each neighbor and the position information available in the LDM. By considering the position error of the information contained in the LDM, vehicles measure their awareness levels accurately.
To further improve this safety awareness metric, we propose a weighted Normalized Error based Safety Awareness Level (wSAL) metric that assigns, via a sigmoid function, a higher weight to errors coming from nearby neighbor vehicles. Since the position error of a closer neighbor is more critical in safety applications, allocating higher importance to such neighbors yields a more accurate measure of vehicle awareness.
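The exact formulas are not given in this abstract, so the sketch below is only an assumed illustration of the metrics' general shape: a discovery ratio penalized by a normalized position error, with the weighted variant scaling each neighbor's contribution by a sigmoid in its distance to the ego vehicle. The communication range, sigmoid parameters and example positions are all hypothetical.

```python
# Hedged sketch of SAL-like and wSAL-like awareness scores (assumed forms, not the
# authors' exact definitions).
import math

def position_error(a, b):
    return math.dist(a, b)

def sal(critical_neighbors, ldm, comm_range=300.0):
    """critical_neighbors: {id: true (x, y)}; ldm: {id: last reported (x, y)}."""
    if not critical_neighbors:
        return 1.0
    score = 0.0
    for nid, true_pos in critical_neighbors.items():
        if nid in ldm:  # neighbor discovered; penalize by normalized position error
            err = min(position_error(true_pos, ldm[nid]) / comm_range, 1.0)
            score += 1.0 - err
    return score / len(critical_neighbors)

def wsal(ego, critical_neighbors, ldm, comm_range=300.0, k=0.05, d0=100.0):
    """Same idea, but errors from nearby neighbors weigh more via a sigmoid in distance."""
    if not critical_neighbors:
        return 1.0
    num = den = 0.0
    for nid, true_pos in critical_neighbors.items():
        d = position_error(true_pos, ego)                 # distance ego -> neighbor
        w = 1.0 / (1.0 + math.exp(k * (d - d0)))          # closer neighbor -> larger weight
        den += w
        if nid in ldm:
            err = min(position_error(true_pos, ldm[nid]) / comm_range, 1.0)
            num += w * (1.0 - err)
    return num / den if den else 1.0

ego = (0.0, 0.0)
neighbors = {1: (50.0, 10.0), 2: (200.0, -30.0)}
ldm = {1: (52.0, 11.0)}                                   # neighbor 2 not yet discovered
print(round(sal(neighbors, ldm), 3), round(wsal(ego, neighbors, ldm), 3))
```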
We developed a simulation model using the NS-3 network simulator and the SUMO traffic simulator to generate realistic road traffic scenarios at different vehicle densities. Simulation results verify that the existing metrics give optimistic estimates of vehicle awareness, while our proposed metrics measure awareness more accurately, leading to better performance evaluation of safety applications.
-
-
-
Energy Efficient Antenna Selection for a MIMO Relay Using RF Energy Harvesting
Authors: Amr Mohamed and Islam Samy
Due to the rapid growth in traffic demands and the number of subscribers, transmit energy consumption has become critical, both environmentally and economically. Increasing the energy efficiency of wireless networks is a main goal of 5G network research, and the research community has proposed promising solutions supporting green communication techniques. Energy efficiency can also be enhanced in a different way: energy can be drawn from renewable sources, compensating (totally or partially) for the traditional power consumption from the grid. Energy harvesting has emerged as a promising technique that helps increase the sustainability of wireless networks. In this paper, we investigate energy-efficient antenna selection schemes for a MIMO relay powered by a hybrid energy source (from the grid or through RF energy harvesting). We aim to utilize the large number of antennas efficiently for both data decoding and energy transfer. We formulate an optimization problem and provide the optimum antenna selection scheme such that the joint power consumption (source and relay power) is minimized while meeting the rate requirements. The problem is a mixed-integer, non-linear and non-convex program, i.e., prohibitively complex. We therefore propose two special cases of the general problem: Fixed Source Power Antenna Selection (FSP-AS), in which we assume a fixed source power and control only the antenna selection, and All Receive Antenna Selection (AR-AS), in which all receiving antennas are turned ON. We further introduce two less complex heuristics, Decoding Priority Antenna Selection (DP-AS) and Harvesting Priority Antenna Selection (HP-AS). Finally, we compare our work with the Generalized Selection Combiner (GSC) scheme used in previous works.
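To illustrate the trade-off behind antenna selection, the deliberately simplified sketch below enumerates receive-antenna subsets and balances the transmit power needed to reach a target rate against per-antenna circuit power. The channel model, the MRC rate expression and all constants are illustrative assumptions and not the system model or formulation of this paper.

```python
# Toy joint-power minimization: more active antennas -> more combining gain (lower
# transmit power for the same rate) but more circuit power. Pick the cheapest subset.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(2)

N_ANT = 6                 # antennas available at the relay (assumed)
P_CIRCUIT = 0.05          # circuit power per active antenna (W), assumed
P_MAX = 2.0               # maximum source transmit power (W), assumed
R_TARGET = 4.0            # target rate (bit/s/Hz), assumed
NOISE = 1e-2
h = rng.rayleigh(scale=1.0, size=N_ANT)   # channel gains source -> relay antennas

def source_power_for_rate(active):
    """Transmit power so that log2(1 + P * sum|h|^2 / N0) = R_TARGET under MRC."""
    gain = np.sum(h[list(active)] ** 2)
    return (2 ** R_TARGET - 1) * NOISE / gain

best = None
for k in range(1, N_ANT + 1):
    for subset in combinations(range(N_ANT), k):
        p_src = source_power_for_rate(subset)
        if p_src > P_MAX:
            continue                      # rate constraint not met within the power budget
        p_total = p_src + k * P_CIRCUIT   # joint objective: transmit + circuit power
        if best is None or p_total < best[0]:
            best = (p_total, subset)

print("best subset:", best[1], "joint power (W): %.3f" % best[0])
```

The exhaustive search above is exactly what makes the general problem prohibitively complex as the antenna count grows, which is why the special cases and heuristics mentioned in the abstract are needed.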
The main contributions of our work can be summarized as follows:
(1) We introduce the energy harvesting technique as an effective way to improve the energy efficiency by using it as a substitute for the grid energy.
(2) In addition to the transmitted energy, we model the circuit power as an important part of the total energy consumption which can affect the energy efficiency.
(3) We allow each antenna to be turned ON or OFF individually, so that only the antennas that are actually needed stay on, saving as much energy as possible.
(4) We introduce two special-case schemes, each addressing a particular component of energy consumption: the FSP-AS scheme focuses more on the circuit energy, while the AR-AS scheme concentrates mainly on the transmitted energy.
(5) We also propose two heuristics to cope with the complexity of the target problem. We evaluate the performance of the proposed schemes numerically. Our key performance indicator (KPI) is the joint power consumed in both the source and the relay. The simulation results show the gain of our optimal scheme in terms of energy efficiency, which can be up to 80% compared to solutions proposed in the literature. Our heuristics show reasonable performance at low target rates, with almost no gap to the optimal scheme at higher target rates. In future work, we will consider modeling more than one source and destination node and extend this model to include interference scenarios.
-
-
-
Green Techniques for Environment-Aware 5G Networks
Authors: Hakim Ghazzai and Abdullah Kadri
Over the last decade, mobile communications have witnessed an unprecedented rise in mobile user demand, perpetually increasing due to the introduction of new services requiring extremely fast and reliable connectivity. Moreover, there is an important increase in the number of devices connected to cellular networks because of the emergence of machine-type communication and the internet of things. Indeed, data traffic on mobile networks is increasing at a rate of approximately 1.5 to 2 times a year, and mobile networks are therefore expected to handle up to 1000 times more data traffic within 10 years. Because of this huge number of wireless terminals, in addition to the deployed radio access networks (RANs) necessary to serve them, future fifth-generation (5G) cellular networks will suffer from an enormous growth in energy consumption with negative economic and environmental impacts. It is predicted that if no actions are taken, per-capita greenhouse gas (GHG) emissions for ICT will increase from about 100 kg in 2007 to about 130 kg in 2020. Therefore, there is an urgent need to develop new techniques and technologies to cope with the exponential energy growth and, correspondingly, the carbon emissions of emerging wireless networks. From a cellular network operator's perspective, reducing fossil fuel consumption is not only about behaving in a green and responsible way towards the environment, but also about solving an important economic issue that operators are facing. Such energy consumption forces mobile operators to pay huge energy bills, which currently constitute around half of their operating expenditures (OPEX). It has been shown that cellular networks currently consume around 120 TWh of electricity per year and that mobile operators pay around 13 billion dollars to serve 5 billion connections per year.
Therefore, there is a growing need to develop more energy-efficient techniques that enhance green performance while respecting the user's quality of experience. Although most existing studies focus on individual physical-layer power optimizations, more sophisticated and cost-effective technologies should be adopted to meet the green objectives of 5G cellular networks. This study investigates three important techniques that can be exploited separately or together to enable wireless operators to achieve significant economic benefits and environmental savings:
- Cellular networks powered by the smart grid: Smart grid is widely seen as one of the most important means that enhance energy savings and help optimize some of consumers' green goals. It can considerably help in reducing GHG emissions by optimally controlling and adjusting the consumed energy. Moreover, it allows the massive integration of intermittent renewable sources and offers the possibility to deliver electricity in a more cost-effective way with active involvement of customers in the procurement decision. Therefore, introducing the concept of smart grid as a new tool for managing the energy procurement of cellular networks is considered as an important technological innovation that would significantly contribute to the reduction of mobile CO2 emissions.
- Base station sleeping strategy: Several studies show that over 70% of the power is consumed by base stations (BSs), or LTE eNodeBs (LTE-eNBs) in 4G networks. Turning off redundant or lightly loaded BSs during off-peak hours can contribute to the reduction of mobile network energy consumption and GHG emissions; a toy load-threshold sketch of this idea is given after this section.
- Green networking collaboration among competing mobile operators: The fundamental idea is to completely turn off the equipment of one service provider and serve the corresponding subscribers using infrastructure belonging to another operator. However, uncoordinated collaboration may increase one operator's profit at the expense of its competitors, causing high energy consumption and very low profit for the active network. Therefore, fairness criteria should be introduced for this type of collaboration.
In this study, we present the techniques described above in detail and provide multiple simulation results measuring the gains that can be obtained using these techniques compared to traditional scenarios.
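The toy sketch below illustrates only the base-station sleeping idea referenced above: a lightly loaded BS is switched off when a neighbour can absorb its traffic. The linear power model, thresholds and loads are illustrative assumptions and are not parameters or results from this study.

```python
# Toy BS sleeping decision: switch off a BS whose load is below a threshold, provided
# a neighbouring BS has spare capacity to take over its users.
P_FIXED, P_PER_LOAD = 800.0, 600.0       # W: idle and load-dependent power (assumed)
SLEEP_THRESHOLD = 0.15                   # switch off below 15 % load (assumed)

def bs_power(load, asleep=False):
    return 0.0 if asleep else P_FIXED + P_PER_LOAD * load

def sleep_decision(loads):
    """loads: per-BS loads in [0, 1]. Returns (asleep flags, new loads)."""
    asleep, new_loads = [False] * len(loads), loads[:]
    for i, load in enumerate(loads):
        if load >= SLEEP_THRESHOLD:
            continue
        # find a neighbour with spare capacity to take over the traffic
        for j in range(len(loads)):
            if j != i and not asleep[j] and new_loads[j] + load <= 1.0:
                new_loads[j] += load
                new_loads[i] = 0.0
                asleep[i] = True
                break
    return asleep, new_loads

loads = [0.10, 0.55, 0.08, 0.70]
asleep, new_loads = sleep_decision(loads)
before = sum(bs_power(l) for l in loads)
after = sum(bs_power(l, a) for l, a in zip(new_loads, asleep))
print(f"asleep: {asleep}, energy saving: {100 * (1 - after / before):.1f} %")
```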
-
-
-
Vibration Energy Harvesting in Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM)
Authors: Loay Ismail, Sara Elorfali and Tarek Elfouly
Harvesting vibration energy from the ambient environment, such as the vibrations experienced by bridges due to vehicle movement, wind or earthquakes, has become an essential area of study for scientists aiming to design systems that improve self-powered sensor nodes in wireless sensor networks (WSNs), thus providing a more efficient system that does not require human involvement.
One of the essential components of a WSN is the sensor node. It continuously sends and receives information to monitor a behavior targeted by the application, for example a bridge's structural health. Sensors are sometimes programmed to send monitoring data 24 hours a day, seven days a week. This configuration strains the sensors' batteries and shortens their lives, since sending and receiving data consumes power and reduces the batteries' voltage levels. For this reason, energy harvesting is critical to maintaining long-lived batteries that can recharge themselves from the available ambient harvested energy, eliminating the need for human intervention to replace or recharge them at their locations in the network.
Recent structural health monitoring (SHM) systems for civil infrastructure have focused heavily on wireless sensor networks (WSNs) due to their efficient use of wireless sensor nodes. Such nodes can be fixed onto any part of the infrastructure, such as a bridge, to collect data remotely for monitoring and further processing. However, the main drawback of such sensor networks lies in the finite lifetime of their batteries. Because of this problem, the concept of harvesting energy from the ambient environment has become more important. Ensuring efficient battery usage greatly benefits overall system operating time and ensures efficient use of natural energy resources such as solar, wind and vibration energy.
This work aims to study the feasibility of using a piezoelectric vibration energy harvester to extend overall battery life using a single external super-capacitor as a storage unit for the harvested energy. The methodology followed in this work traces the general flow of energy in a sensor node, which can be summarized as follows:
1 Piezoelectric Vibration Energy Harvester: This module was used to convert mechanical energy of the vibrations from the ambient environment to electrical energy.
2 Energy Harvesting Circuit: This circuit is responsible for the power conditioning, enabling the circuit to output energy to the sensors under certain threshold criteria.
3 Energy Storage: The super-capacitor serves to store the harvested energy.
4 Energy Management Scheme: The scheme proposed in this work operates within the energy requirements and constraints of the sensor nodes in order to conserve battery voltage levels and extend the sensors' battery lives.
5 Wireless Sensor Nodes: Each sensor node type has specific energy requirements that must be recognized so that it can be adequately powered and turned on using the harvested energy.
The main contribution of this work is an energy management scheme which ensures that the harvested energy supplied to the harvester circuit is greater than the energy that will be consumed by the sensor. The proposed scheme has demonstrated the feasibility of using impact vibrations for efficient energy harvesting and, consequently, of increasing the battery lifetime needed to power the wireless sensor nodes.
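The core rule of such a scheme, spend energy only when the super-capacitor holds more than the next cycle will consume, can be sketched as below. The capacitance, voltage floor and per-cycle energy figures are assumptions chosen for illustration, not values measured in this work.

```python
# Minimal threshold rule: allow a sense-and-transmit cycle only if the usable energy
# in the super-capacitor (above the electronics' minimum voltage) exceeds the cycle cost.
C_SUPER = 0.022         # F, super-capacitor value (assumed)
V_MIN = 2.2             # V, minimum operating voltage of the sensor electronics (assumed)
E_CYCLE = 0.015         # J, energy one sense-and-transmit cycle consumes (assumed)

def stored_energy(v):
    return 0.5 * C_SUPER * v ** 2            # E = 1/2 C V^2

def usable_energy(v):
    return max(stored_energy(v) - stored_energy(V_MIN), 0.0)

def can_run_cycle(v_capacitor):
    # Harvested reserve must exceed what the cycle will draw, with headroom above V_MIN.
    return usable_energy(v_capacitor) > E_CYCLE

for v in (2.3, 2.6, 3.0):
    print(f"Vcap = {v:.1f} V -> usable {usable_energy(v) * 1000:.1f} mJ,"
          f" run cycle: {can_run_cycle(v)}")
```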
Furthermore, as a future direction of work, to increase the amount of harvested energy, hybrid power sources can be explored by combining more than one energy source from the ambient environment, such as solar and vibration energy.
-
-
-
Design of a Novel Wireless Communication Scheme that Jointly Supports Both Coherent and Non-Coherent Receivers
Authors: Mohammad Shaqfeh, Karim Seddik and Hussein Alnuweiri
As is well known, wireless channels are characterized by temporal and spatial variations due to factors such as multipath propagation and the mobility of the communicating devices or their surrounding environment. This has two consequences. First, the channel quality (i.e., amplitude) varies, resulting in changes in the data rate (in bits/sec) that can be received reliably over the channel. Second, the channel phase varies, which requires the receiver to track these changes reliably in order to demodulate the transmitted signal correctly; this is called "coherent" transmission. If the receiver is unable to track the phase variations, the transmitter should use "non-coherent" modulation schemes, which can be detected without phase knowledge at the receiver, at the cost of a significant reduction in the data rate that can be transmitted reliably. Therefore, modern communication systems include channel estimation algorithms to enable coherent reception. However, this is not always feasible. Channel estimation is usually accomplished by transmitting pilot signals at some frequency. Depending on the frequency of pilot transmission and the channel coherence time, some receivers may have reliable channel estimates while others may not. This is one reason why each mobile wireless standard supports some maximum velocity for mobile users, limited by the frequency of pilot transmission. Mobile users moving at higher speeds may not have reliable channel estimates, which means that they will not be able to receive any information via coherent transmission.
With this in mind, our main interest in this work is broadcasting systems such as mobile TV. These systems are usually transmitted using coherent modulation schemes in order to provide a quality of reception that cannot be maintained with the low-rate non-coherent modulation schemes. Therefore, mobile users with unreliable channel estimates will not be able to receive such applications, while users with reliable channel estimates can receive the transmitted stream reliably. Broadcasting applications are thus characterized by "all or nothing" reception, depending on the mobility and channel conditions of the receiving terminals.
Alternatively, we propose a layered coding scheme from a new viewpoint that has not been addressed before in the literature. The scheme has two layers: a base layer (non-coherent layer) that can be decoded by any receiver, even one without reliable channel estimates, and a refining layer (coherent layer) that can be decoded only by receivers with reliable channel estimates. The basic bits can be transmitted on the first layer and the extra bits that improve quality on the second layer. Therefore, receivers with unreliable channel estimates can decode the non-coherent layer, while receivers with reliable channel estimates can decode all the information and experience better service quality.
This proposed scheme can be designed using multi-resolution broadcast space-time coding, which allows the simultaneous transmission of low rate (LR) non-coherent information for all receivers, including those with no channel state information (CSI), and high rate (HR) coherent information to those receivers that have reliable CSI. The proposed scheme ensures that the communication of the HR layer is transparent to the underlying LR layer. We can show that both the non-coherent and coherent receivers achieve full diversity, and that the proposed scheme achieves the maximum number of communication degrees of freedom for non-coherent LR channels and coherent HR channels with unitarily-constrained input signals.
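As a hedged, simplified illustration of why a non-coherent base layer is decodable without any phase knowledge (and not of the multi-resolution space-time code itself), the sketch below differentially encodes BPSK so that the information sits in the phase difference between consecutive symbols; an unknown channel phase then cancels out at the receiver. All signal parameters are arbitrary assumptions.

```python
# Differential BPSK: non-coherent detection compares consecutive received symbols,
# so a constant unknown channel phase has no effect on the decisions.
import numpy as np

rng = np.random.default_rng(3)
bits = rng.integers(0, 2, 200)

# Differential encoding: s[k] = s[k-1] * (-1)^bit[k]
s = np.empty(bits.size + 1, dtype=complex)
s[0] = 1.0
for k, b in enumerate(bits, start=1):
    s[k] = s[k - 1] * (-1.0 if b else 1.0)

theta = rng.uniform(0, 2 * np.pi)                 # unknown channel phase, never estimated
r = s * np.exp(1j * theta) + 0.1 * (rng.standard_normal(s.size)
                                    + 1j * rng.standard_normal(s.size))

# Non-coherent (differential) detection
decisions = (np.real(r[1:] * np.conj(r[:-1])) < 0).astype(int)
print("bit errors without any phase knowledge:", int(np.sum(decisions != bits)))
```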
-
-
-
Wearable D2D Routing Strategies for Urban Disaster Management – A Case Study
Authors: Dhafer Ben Arbia, Muhammad Mahtab Alam, Rabah Attia and Elyes Ben Hamida
Critical and public safety operations require real-time communication from the incident area(s) to the distant operations command center, passing through the evacuation and medical support areas. The data transmitted through such a network is extremely useful for decision makers and for conducting the operations, and any delay in communication may cause loss of life. Moreover, the existing infrastructure-based communication systems (PSTN, WiFi, 4G/5G, etc.) can be damaged and are often not an available option. An alternative is to deploy an autonomous tactical network at an unpredictable location and time. In this context, however, there are many challenges, especially how to effectively route or disseminate the information. In this paper, we present the behavior of various multi-hop routing protocols evaluated in a simulated disaster scenario with different communication technologies (WiFi IEEE 802.11, WSN IEEE 802.15.4 and WBAN IEEE 802.15.6). The studied routing strategies fall into several classes: ad hoc proactive and reactive protocols, geographic-based protocols and gradient-based protocols. To be realistic, we conducted our simulations by considering a mall in Doha city in the State of Qatar, and we generated a mobility trace to model the movements of rescue teams and the crowd during the disaster. Extensive simulations showed that WiFi IEEE 802.11 is the best wireless technology to consider in an urban emergency with the studied protocols, while the gradient-based routing protocol performed much better, especially with WBAN IEEE 802.15.6.
Keywords: Tactical Ad-hoc Networks; Public Safety and Emergency; Routing Protocols; IEEE 802.11; IEEE 802.15.4; IEEE 802.15.6; Performance Evaluation
I. Introduction
Public safety is a concern for governments worldwide. It is a continuous, reactive set of studies, operations and actions intended to predict, plan and perform a successful response in the event of a disaster. With the rise in the number and variety of disasters, not only economies and infrastructures are affected, but also a significant number of human lives. With regard to the emergency response to these disasters, the role of existing Public Safety Network (PSN) infrastructures (e.g., TETRA, LTE) is extremely vital. However, it is anticipated that, during and after a disaster, existing PSN infrastructures can be flawed, oversaturated or completely damaged. Therefore, there is a growing demand for a ubiquitous emergency response system that can be easily and rapidly deployed at an unpredictable location and time. Wearable Body Area Networks (WBANs) are a relevant candidate that can play a key role in monitoring the physiological status of the deployed workforces and the affected individuals. Composed of small, low-power devices connected to a coordinator, the WBAN communication architecture relies on three levels: on-body (intra-body), body-to-body (inter-body) and off-body communication networks.
In disaster scenarios, network connectivity and data delivery are challenging problems due to the dynamic mobility and harsh environment [1]. It is envisioned that, when network infrastructures are unavailable or out of range, the WBAN coordinators, along with the WBAN sensors, can exploit cooperative, multi-hop body-to-body communications to extend end-to-end network connectivity. The Opportunistic and Mobility Aware Routing (OMAR) scheme is an on-body routing protocol proposed in one of our earlier works [2].
A realistic mobility model is another major challenge for the simulations. To the best of the authors' knowledge, no comparable study in the disaster context has been conducted using a realistic disaster mobility pattern.
In this paper, we investigate several classes of multi-hop body-to-body routing protocols using realistic mobility modeling software provided by Bonn University in Germany [3]. The mobility pattern is used by the WSNET simulator as a mobility trace of the nodes moving during and after the disaster; individuals are modeled as mobile nodes in the scenario. In each simulation iteration, one communication technology configuration is selected (WiFi IEEE 802.11, WSN IEEE 802.15.4 or WBAN IEEE 802.15.6) and simulations are then run with each of the routing protocols (proactive, reactive, gradient-based and geographical-based). This strategy provides insight not only into the behavior of the routing protocols in the disaster context, but also into the suitability of the communication technologies in such a case. As the proactive, reactive, gradient-based and geographical-based routing protocols, we selected the Optimized Link State Routing protocol version 2 (OLSRv2) [4], Ad-hoc On-Demand Distance Vector (AODVv2) [5], Directed Diffusion (DD) [6] and Greedy Perimeter Stateless Routing (GPSR) [7], respectively.
The remainder of this abstract is organized as follows. In section II, we present briefly the disaster scenario considered. In section III, we explain the results of the simulations. Finally, in Section IV, we conclude and discuss perspectives.
II. Landmark disaster scenario
We investigate a disaster scenario (an outbreak of fire) in the “Landmark” shopping mall. The mobility model used is generated by the BonnMotion tool. We assume that the incident is caused by fires in two different locations, and rescuers are immediately called to intervene. Firefighters are divided into 3 groups of vehicles with 26 firefighters in each group. Medical emergency teams, which would likely reach the mall just after the incident, consist of 6 ambulances with 5 medical staff each (30 rescuers in total).
Civilians can also help the rescuers, and they too are considered in the mobility trace generation. Injured individuals are transported from the incident areas to the patients-waiting-for-treatment areas to receive first aid. They are then transported to the patient-clearing areas, where they are either put under observation or evacuated to hospitals by ambulance or helicopter. A tactical command center conducting the operations is also represented in the scenario. WSNET is an event-driven simulator for wireless networks, able to design, configure and simulate a whole communication stack from the physical to the application layer. We benefit from these features to vary the payload together with the selected MAC and routing layers in each iteration. These combinations provide a thorough review of the possible communication architectures to consider in disaster operations. The following section describes the outcome of these extensive simulations.
III. Performance evaluation
The main difference between a disaster scenario and any other scenario is the emergency aspect: all data flowing in the network is considered highly important, and the probability of packets failing to reach their destination should be essentially zero. For this reason, our evaluation focuses on the Packet Reception Ratio (PRR). Likewise, a severely delayed packet is as bad as an unreceived packet, which is why we also consider delay a decisive factor. Similarly, the energy consumption is observed.
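For concreteness, the short sketch below shows how these two metrics can be computed from a send/receive trace: PRR as the fraction of sent packets that arrive, and the average end-to-end delay over the packets that do arrive. The trace format is an assumption for illustration and is not the WSNET output format.

```python
# Compute PRR and average end-to-end delay from a simple packet trace.
def evaluate(sent, received):
    """sent: {packet_id: t_sent}; received: {packet_id: t_received} (seconds)."""
    prr = len(received) / len(sent) if sent else 0.0
    delays = [received[p] - sent[p] for p in received if p in sent]
    avg_delay = sum(delays) / len(delays) if delays else float("nan")
    return prr, avg_delay

sent = {1: 0.00, 2: 0.10, 3: 0.20, 4: 0.30}
received = {1: 0.04, 2: 0.17, 4: 0.41}          # packet 3 was lost
prr, delay = evaluate(sent, received)
print(f"PRR = {prr:.2f}, average delay = {delay * 1000:.0f} ms")
```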
The obtained results can be summarized as follows.
In terms of average PRR, WiFi IEEE 802.11 is convincingly better than the two other technologies when combined with any of the routing protocols. GPSR achieves a considerable PRR with WBAN IEEE 802.15.6, but the location information of the nodes is assumed to be known. Regarding delay, DD is particularly efficient with WiFi and WBAN.
To conclude, DD is an efficient routing protocol for indoor operations, while GPSR is most relevant for outdoor operations, where locations can be obtained from GPS.
IV. Conclusion
Disasters are growing remarkably worldwide, and the existing communication infrastructures are not considered part of the rescue communication system. Consequently, to monitor the deployed individuals (rescue teams, injured individuals, etc.), data should be forwarded through these individuals' WBANs to reach a distant command center. In order to evaluate the performance of diverse multi-hop routing protocols, we conducted extensive simulations in WSNET using a realistic mobility model. The simulations showed that all the evaluated routing protocols (AODVv2, OLSRv2, DD and GPSR) achieve their best PRR with the WiFi technology, while DD was found to be the most efficient with the WBAN technology. GPSR can also be considered when location information is available.
Acknowledgment
The work was supported by NPRP grant #[6-1508-2-616] from the Qatar National Research Fund which is a member of Qatar Foundation. The statements made herein are solely the responsibility of the authors.
References
[1] M. M. Alam, D. B. Arbia, and E. B. Hamida, “Device-to-Device Communication in Wearable Wireless Networks,” 10th CROWNCOM Conf., Apr. 2015.
[2] E. B. Hamida, M. M. Alam, M. Maman, and B. Denis, “Short-Term Link Quality Estimation for Opportunistic and Mobility Aware Routing in Wearable Body Sensors Networks,” IEEE 10th Int. Conf. on Wireless and Mobile Computing, Networking and Communications (WiMob 2014), pp. 519–526, Oct. 2014.
[3] N. Aschenbruck, “BonnMotion: A Mobility Scenario Generation and Analysis Tool,” University of Osnabrück, Jul. 2013.
[4] T. Clausen, C. Dearlove, P. Jacquet, and U. Herberg, “RFC 7181: The Optimized Link State Routing Protocol Version 2,” Apr. 2014.
[5] C. Perkins, S. Ratliff, and J. Dowdell, “Dynamic MANET On-demand (AODVv2) Routing, draft-ietf-manet-dymo-26,” Feb. 2013.
[6] C. Intanagonwiwat, R. Govindan, and D. Estrin, “Directed diffusion: a scalable and robust communication paradigm for sensor networks,” pp. 56–67, 2000.
[7] B. Karp and H. T. Kung, “GPSR: Greedy Perimeter Stateless Routing for Wireless Networks,” ACM/IEEE Int. Conf. on Mobile Computing and Networking (MobiCom 2000), 2000.
-