Qatar Foundation Annual Research Conference Proceedings Volume 2016 Issue 1
- Conference date: 22-23 Mar 2016
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2016
- Published: 21 March 2016
Accelerating Data Synchronization between Smartphones and Tablets using PowerFolder in IEEE 802.11 Infrastructure-based Mesh Networks
Authors: Kalman Graffi and Andre Ippisch
Smartphones are nowadays widely available and popular in first-, second- and third-world countries. Nevertheless, their Internet connectivity is limited to GPRS/UMTS/LTE connections with small monthly data plans, even when the communicating devices are nearby, while local communication options such as Bluetooth or NFC provide only very poor data rates. Future applications envision smartphones and tablets as the main working and leisure devices, and with the upcoming trends in multimedia and working documents, file sizes are expected to grow further. In strong contrast, smartphone users are faced with very small monthly data plans: at most a few GB are available per month, while regularly 1 GB or much less mobile data traffic is available to smartphone users worldwide. Thus, future data exchange between smartphones and tablets is severely limited by current technology and business models. Even if the business models for wireless Internet plans allowed unlimited data exchange, most regions of the world would still be limited in connectivity and data exchange options. This limitation of the future's main communication devices, namely smartphones and tablets, is striking: with current solutions, data exchange between these devices is strongly handicapped. Considering that data exchange and synchronization often target geographically close destinations, such as nearby friends or colleagues, it is very strange that local data exchange is so strongly limited by missing or traffic-limited connectivity to the Internet.
In this presentation, we present a novel approach for high-speed data transfers between smartphones and/or tablets in local environments. Specifically, our approach enables empirically tested transmission speeds of up to 60 Mbit/s between nearby nodes in any environment, requires no contract with a mobile Internet provider, and is not subject to any data plan restriction. We introduce a novel IEEE 802.11 infrastructure-mode mesh networking approach that allows Android phones to create multihop mesh networks despite the limitations of the WiFi standard. With this approach, wireless multihop networks can be built, and we evaluate them in empirical measurements, allowing automatic synchronization and data exchange between smartphones and tablets at distances of up to 100 meters for a single hop, and farther over several wireless hops. One use case considers colleagues working in the same building who would like to exchange data on their smartphones and tablets: data requests and offerings are signaled in the multihop environment, and dedicated wireless high-speed connections are established to transfer the data. Another use case is local communication with mails, images and chat messages with friends nearby; as the communication approach also supports multicasting, several users can be addressed with a single message. A further use case is a wireless file-sharing or data-exchange service that allows users to specify their interests, such as action movies on a campus or information on sale promotions in a mall; as the user walks along, the smartphone picks up relevant data without user interaction. One use case relevant for Qatar is the area of smart cities: smart devices could pick up sensor data from buildings, local transport vehicles or citizens in a delay-tolerant fashion, using the GPS functionality of Android to deliver accurate sensor information tagged with the location and time of the sensor snapshot. Finally, this local high-speed wireless synchronization approach allows implementing a data-synchronization service such as Dropbox, Box or PowerFolder between smartphones and tablets, a novel feature on the market. To implement this idea, we are collaborating with PowerFolder, Germany's market leader in the field of data synchronization at universities and in academia. On the technology side, we observe that Android has a market share of around 80% worldwide as a smartphone operating system. While Android supports connections to the Internet and cloud-based services very well, it offers only Bluetooth and NFC for local connectivity, and both technologies provide very low data rates. WiFi Direct is also supported, but like Bluetooth it requires considerable user interaction and thus does not scale to many contact partners. The IEEE 802.11 standard supports an ad hoc mode for local high-speed wireless communication with rates of up to 11 Mbit/s, but Android does not support this mode. Instead, we use the infrastructure mode of IEEE 802.11 to create ad hoc wireless mesh networks, which supports much higher data rates of up to 54 Mbit/s and more.
Please note that WiFi Direct also claims to allow similar performance, but it heavily requires user interaction and, in our empirical studies, failed to connect more than 3–4 nodes reliably; we aim for interaction among hundreds of nodes in a delay-tolerant manner without user interaction. Our aims are to: (1) use only unrooted Android functionality; (2) allow nodes to find each other through the wireless medium; (3) automatically connect and exchange signaling information without user interaction; (4) decide on messages to exchange in a delay-tolerant manner supporting opportunistic networking; (5) coordinate transmissions between various nodes when more than one node is in proximity; (6) allow single-hop data transfers based on the signaled have and need information; (7) allow multihop node discovery; and thus (8) support multihop routing of data packets. We reach these aims using the unrooted Android API that allows apps to open a WiFi hotspot for Internet tethering. While we do not want to use the Internet, this service also allows clients to connect to the hotspot and exchange messages with it. Using this approach, when an Android phone or tablet opens a hotspot, other Android devices running the app can join without user interaction: the app on the joining node maintains a list of known WiFi networks, which are consistently named "P2P-Hotspot-[PublicKeyOfHotspotNode]". The public key of the hotspot node serves as a unique identifier of the node and as an option to asymmetrically encrypt communication to that node. With these Android API functionalities, we are able to dynamically connect unrooted, off-the-shelf Android devices that run our app. The IEEE 802.11 infrastructure mode we use has the characteristic that the access point, in our case the hotspot, is involved in any communication with attached clients: clients can only communicate with each other through the hotspot, and all clients share the available bandwidth of the WiFi cell. Thus, for a set of nodes it is preferable to establish dedicated 1-to-1 connections, with one node acting as hotspot and the other as client, for fast high-speed transmission, rather than having all nodes connected through one hotspot and sharing its bandwidth. To support this, we differentiate between a dedicated signaling phase and a dedicated data transfer phase. Nodes scan the available WiFi networks and look for the specific SSIDs "P2P-Hotspot-[PublicKeyOfHotspotNode]", which mark available hotspots. As the app considers these networks as known and the access key is hardcoded, a direct connection can be established without user interaction. These steps fulfill requirements (1) and (2). Next, each node signals the data it holds for various destination nodes, addressed by their public keys as node identifiers; nodes can also signal data that they share generally, on a keyword basis. The hotspot gathers this signaling information from its clients and creates an index of which node has what (have list) and wants what (want list). Based on this index, potential matches are calculated and communicated to the clients. In order to establish direct high-speed connections, the matched nodes release their connections to the hotspot and establish a new connection between each other: one as hotspot and one as connected client.
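To make the signaling phase concrete, the following Python sketch shows how a hotspot could index the signaled have/want lists and pair nodes for dedicated 1-to-1 transfers; the message format and the greedy pairing strategy are illustrative assumptions rather than the app's actual logic.

```python
# Minimal sketch of the hotspot's signaling index and 1-to-1 match-making.
# The message format and greedy pairing are assumptions for illustration.

def compute_matches(signals):
    """signals: {node_pubkey: {"have": set_of_item_ids, "want": set_of_item_ids}}
    Returns (provider, consumer, items) triples; each node is used at most
    once, so the matched pair can leave the hotspot for a dedicated link."""
    matches, used = [], set()
    for a, sig_a in signals.items():
        if a in used:
            continue
        for b, sig_b in signals.items():
            if b == a or b in used:
                continue
            items = sig_a["have"] & sig_b["want"]
            if items:
                matches.append((a, b, items))
                used.update((a, b))
                break
    return matches

# Example: nodes identified by their public keys, as in the SSID scheme.
signals = {
    "pk_alice": {"have": {"report.pdf"}, "want": set()},
    "pk_bob":   {"have": set(),          "want": {"report.pdf"}},
}
print(compute_matches(signals))  # [('pk_alice', 'pk_bob', {'report.pdf'})]
```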
This reconnection step is very dynamic and allows closely located nodes to connect and exchange data based on their signaling to the previous hotspot. The freshly connected nodes signal their offerings and interests again to confirm the match. The following high-speed 1-to-1 data transfer can reach up to 60 Mbit/s, much more than the 11 Mbit/s of the traditional IEEE 802.11 ad hoc mode. Once the transfer is done, the nodes release their link and listen again for the presence of coordinating hotspots; if no hotspot is available, based on random timeouts, they offer this role themselves and create a hotspot. Hotspots actively wait for clients only for a short time and then try to join a hotspot themselves, so the roles fluctuate constantly. The network thus constantly provides connection options to nearby nodes through dynamic hotspotting: hotspots index the content of connected clients and coordinate ideally matching 1-to-1 pairs for high-speed transfers. This resolves requirements (2)–(6). Finally, to implement requirements (7) and (8), nodes also maintain information on the hotspots they have met and the nodes they have exchanged data with, creating a time-dependent local view of the network. In addition to signaling the data files they offer, nodes signal their connectivity history to the hotspots they meet; hotspots in turn share this information with newly joining clients, creating a virtual view of the network topology. Using this information, nodes can decide in which direction, i.e. over which nodes, to route a message, so multihop routing is supported: step by step and in a delay-tolerant manner, data packets are passed on until they reach the destination. This approach to opportunistic multihop routing fulfills requirements (7) and (8). In close cooperation with PowerFolder, we implemented this approach and evaluated its feasibility and performance. PowerFolder is the leading data synchronization solution in the German market and allows synchronizing data between desktop PCs; with our extension, it is also possible to synchronize data directly across smartphones and tablets, saving mobile data traffic from the data plan while supporting fast, best-in-class transmission speeds. We implemented our approach in an Android app and performed various tests of connection speed, transmission times and reliability. Our measurements show that transmission speeds of up to 60 Mbit/s are reached in closest proximity, and around 10 Mbit/s is still obtainable at 100 meters distance. The multihop functionality works reliably, with a decrease in transmission speed related to distance and the number of hops. We also experimented with hop-wise and end-to-end transmission encryption and the resulting security gain; the encryption speed is reasonable for small files. Note that with the public-key infrastructure we established, data can be encrypted both hop-by-hop and end-to-end. Our approach presents a novel networking approach for the future connected usage of smartphones and tablets in the information-based society. Addressing one of the grand challenges of Qatar, we are optimistic that the approach is also very suitable in Qatar to support society's shift towards better usability and more secure, higher-bandwidth data exchange.
Mixed Hybrid Finite Element Formulation for Subsurface Flow and Transport
Authors: Ahmad S. Abushaikha, Denis V. Voskov and Hamdi A. Tchelepi
We present a mixed hybrid finite element formulation for modelling subsurface flow and transport. The formulation is fully implicit in time and employs tetrahedral elements for the spatial discretization of the subsurface domain. It comprises all the main physics that dictate the flow behaviour of subsurface flow and transport, since it is developed on top of, and inherits them from, the Automatic Differentiation General Purpose Research Simulator (AD-GPRS) of the Stanford University Petroleum Research Institute (SUPRI-B).
Traditionally, the finite volume formulation is the method employed for computational fluid dynamics and reservoir simulation, thanks to its local conservation of mass and energy and its straightforward implementation. However, it requires the use of structured grids and fails to handle high anisotropy in the material properties of the domain. The method is also of low computational order: the computed local solution in the grid is piecewise constant.
Here, we use the mixed hybrid finite element formulation, which is of high order and can handle high anisotropy in the material properties. It solves the momentum and mass balance equations simultaneously, hence the name mixed. This strongly coupled scheme facilitates the use of unstructured grids, which are important for modelling the complex geometry of subsurface reservoirs. The Automatic Differentiation library of AD-GPRS automatically differentiates the computational variables needed for the construction of the Jacobian matrix, which consists of the momentum and mass balance unknowns and any wells present.
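In a standard mixed form consistent with this description (a sketch, not necessarily the exact equation set of AD-GPRS), the flux u and pressure p are unknowns of one simultaneously solved system:

```latex
% Mixed (simultaneous) form of Darcy flow: momentum and mass balance are
% solved together for the flux u and the pressure p.
\begin{align}
  \mathbf{u} &= -\frac{\mathbf{K}}{\mu}\,\nabla p
    && \text{(momentum balance: Darcy's law, } \mathbf{K} \text{ possibly highly anisotropic)} \\
  \frac{\partial(\phi\rho)}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}) &= q
    && \text{(mass balance, with wells entering through the source term } q\text{)}
\end{align}
```

Discretizing u with RT0 gives one flux unknown per element face, while BDM1 carries three per face, which is what enlarges the Jacobian relative to a finite volume scheme.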
We use two types of tetrahedral elements, Raviart-Thomas (RT0) and Brezzi-Douglas-Marini (BDM1), of low and high order respectively. RT0 has one momentum equation per interface, while BDM1 has three momentum equations per interface, assuring second-order flux approximation. Therefore, compared to the finite volume approach, where the Jacobian consists of the mass balance and well unknowns only, the mixed hybrid formulation has a larger Jacobian (one order of magnitude larger for the high-order element), which is computationally expensive. Nonetheless, as we show, the formulation converges better, both numerically and physically, than the finite volume approach.
The full system is solved implicitly in time to account for the non-linear behaviour of the flow and transport at the subsurface level, which is highly pressure, volume, and temperature (PVT) dependent. We therefore make use of the robust PVT formulations already available in AD-GPRS. We present a carbon dioxide (CO2) sequestration case for the Johnson formation and discuss the numerical and computational results.
This is of crucial importance for Qatar and the Middle East, where effective reservoir modelling and management require a robust representation of the flow and transport at the subsurface level using state-of-the-art formulations.
In the literature, Wheeler et al. (2010) employ a multipoint flux mixed finite element approach to eliminate the momentum balance equation from the Jacobian and substitute it with the well-established multipoint flux approximation (MPFA) in the mass balance equation. Since it is based on MPFA, it still suffers from convergence issues where high anisotropy is present in the material properties. They have recently extended their work to compositional modelling of fluids (Singh & Wheeler, 2014); however, they solve the system sequentially in time, whereas our method solves the system fully implicitly in time. Sun & Firoozabadi (2009) solve the pressure implicitly and the fluid properties explicitly in time by further decoupling the mass balance equations, which decreases the physical representation of the non-linear behaviour of the flow and transport at the subsurface level.
Develop a Global Scalable Remote Laboratory Based on a Unified Framework
Authors: Hamid Parsaei, Ning Wang, Qianlong Lan, Xuemin Chen and Gangbing Song
Information technology has had a great impact on education and research by enabling additional teaching and research strategies. According to the 2014 Sloan Survey of Online Learning, the number of students who have taken at least one online course increased to a new total of 7.1 million during the Fall 2013 semester. Remote laboratory technology has made great progress in the arena of online learning. Internet remote-controlled experiments were previously implemented based on the unified framework at UH and TSU. However, end users of that framework were required to install the LabVIEW plug-in in their web browsers to use the remote experiments online, and the framework supported only desktops and laptops. In order to resolve the plug-in issues, a novel unified framework is proposed, based on Web 2.0 and HTML5 technology. As shown in Fig. 1, there are three application layers in the unified framework: the client application layer, the server application layer and the experiment control layer. The client web application is based on HyperText Markup Language (HTML), Cascading Style Sheets (CSS) and the jQuery/jQuery Mobile JavaScript libraries; Mashup technology is used for the user interface implementation. The client web application runs in most current popular browsers such as IE, Firefox, Chrome and Safari. The server application is based on Web Service technology and built directly on top of the MySQL database, the Apache web server engine and the Node.js web server engine. It utilizes JSON and Socket.IO, which is built on the web socket protocol, to implement real-time communication between the server application and the client web application (Rai, R.). The server application runs on a LNAMP (Linux/Node.js/Apache/MySQL/PHP) server. The experiment control application is based on LabVIEW and uses Socket.IO for real-time communication with the server application. The remote laboratory based on this novel unified framework can run on many different devices, such as desktop and laptop PCs, iPads, Android pads and smartphones, without software plug-ins. However, some challenges remain for remote laboratory development: 1) How to access remote experiments installed at different laboratories through a single webpage? 2) How to manage the remote experiments at the different laboratories? 3) How to resolve system safety issues? To address these challenges, a new scalable global remote laboratory was implemented at Texas A&M University at Qatar (TAMUQ) based on the improved unified framework. To integrate the three different remote laboratories at TAMUQ, UH and TSU, a new global scalable remote laboratory architecture was designed and developed at TAMUQ. The labs operate with a unified scheduler, a federated authentication module and a user management system. Meanwhile, a scalable server was also set up at TAMUQ to support expansion of the remote laboratory. Figure 2 shows the global scalable remote laboratory architecture. In this scalable remote laboratory, the laboratory center server at TAMUQ consists of a scalable server connected to the other two lab center servers at UH and TSU. All three laboratory center servers are based on the LNAMP (Linux/Node.js/Apache/MySQL/PHP) architecture.
Socket.IO, a recent real-time communication technology, is used to manage the transmission of experimental data and other user information (such as user profiles and login information) in this global platform. The center server at TAMUQ was designated as the center proxy server for the scalable remote laboratory. With this global platform, end users can access all remote experiments of the three universities via one website. With the new global scalable remote laboratory based on the novel unified framework, a scalable scheduler and a federated authentication solution were designed and implemented. At the same time, issues with security control and management of experiment access were addressed by taking full advantage of the functionality offered by the security management engine based on the MD5 algorithm. As shown in Fig. 3, a new user interface was also developed and integrated into the scalable remote laboratory. With the new global scalable remote laboratory, future teaching and learning activities at TAMUQ, UH and TSU will be improved. Meanwhile, the improved unified framework will also significantly benefit future remote laboratory development.
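The real-time relay role of the center proxy server can be sketched as follows; for brevity the sketch uses Python's python-socketio package rather than the authors' Node.js stack, and the event names and payload fields are illustrative assumptions.

```python
# Sketch of the center proxy server's relay role, written with
# python-socketio for brevity (the authors' servers run on Node.js).
# Event names ('join', 'experiment_command', 'experiment_data') and
# payload fields are illustrative assumptions.
import eventlet
import socketio

sio = socketio.Server(cors_allowed_origins='*')
app = socketio.WSGIApp(sio)

@sio.on('join')
def join(sid, lab):
    # A lab center server (e.g. 'TAMUQ', 'UH', 'TSU') registers itself
    # under a room named after the lab.
    sio.enter_room(sid, lab)

@sio.on('experiment_command')
def experiment_command(sid, data):
    # Forward a user's control command to the lab hosting the experiment,
    # e.g. data = {'lab': 'UH', 'experiment': 'pendulum', 'cmd': 'start'}.
    sio.emit('experiment_command', data, room=data['lab'])

@sio.on('experiment_data')
def experiment_data(sid, data):
    # Broadcast measurement frames from a lab back to connected clients.
    sio.emit('experiment_data', data)

if __name__ == '__main__':
    eventlet.wsgi.server(eventlet.listen(('', 5000)), app)
```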
The Assessment of Pedestrian-Vehicle Conflicts at Crosswalks Considering Sudden Pedestrian Speed Change Events
Authors: Wael Khaleel Alhajyaseen and Miho Iryo
Introduction
Pedestrians are vulnerable road users. In Japan, more than one-third of traffic crash fatalities are pedestrians, and most accidents occur as pedestrians cross a road. To evaluate alternative countermeasures effectively, traffic simulation has recently been considered a powerful decision support tool (Shahdah et al. 2015). A very important requirement for reliable use of traffic simulation in safety assessment is the proper representation of road user behavior at potential conflict areas. Severe conflicts usually occur when road users fail to predict other users' decisions and react to them properly. The widely varying behaviors and maneuvers of vehicles and pedestrians may lead to misunderstanding of their decisions, which can result in severe conflicts. So far, most existing studies assume constant walking speeds for pedestrians and complete obedience to traffic rules when crossing roads, as if pedestrians were on walkways. However, it is known that pedestrians behave differently at crosswalks than at other walking facilities such as sidewalks and walkways: they tend to walk faster at crosswalks (Montufar et al. 2007), and their compliance with traffic signals varies with traffic conditions and other factors (Wang et al. 2011). Although many studies have analyzed pedestrian behavior, including speed, at crosswalks, most are based on the average crossing speed, without considering the speed profile of the crossing process and the variations within it. Iryo-Asano et al. (2015) observed from empirical data that pedestrians may suddenly and significantly change their speed on crosswalks as a reaction to surrounding conditions. Such speed changes cannot be predicted by drivers, which can lead to safety hazards. A study of speed change maneuvers is therefore critical for representing potential collisions in simulation systems and for reasonably evaluating the probability and severity of collisions. The objective of this study is to quantitatively model pedestrian speed change maneuvers and integrate the model into traffic simulation for assessing traffic safety.
Pedestrian speed change events as critical maneuvers
Figure 1 shows an observed pedestrian trajectory with a sudden speed change. If a turning vehicle approaches the conflict area, the driver may act based on his expectation of the pedestrian's arrival time at the conflict area. If the pedestrian suddenly changes his/her speed close to the conflict area, the driver cannot predict the new arrival time, which might lead to severe conflicts. Figure 1 demonstrates a real observed example of such a speed change: the pedestrian suddenly increased his speed at the beginning of the conflict area, arriving at the conflict area 2.0 seconds (Tdif) earlier than the time expected had he continued at his previous speed. A turning vehicle present at the same time could not have predicted this early arrival, and these 2 seconds are large in terms of collision avoidance. Iryo-Asano et al. (2015) showed that pedestrian speed changes mainly occur 1) at the entrance to the pedestrian-vehicle conflict area and 2) when there is a large gap between the pedestrian's current speed and the speed necessary to complete crossing before the end of the pedestrian flashing green interval. In this study, further in-depth analysis is conducted by combining the pedestrian data with the trajectories of approaching vehicles to identify the factors influencing pedestrians' sudden speed change events. The probability of a speed change is quantitatively modeled as a function of the remaining green time, the remaining length to cross, the current walking speed and other related variables.
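The abstract does not give the functional form of the model; as an illustration, a binary logistic specification over the named covariates could look like the following sketch (the coefficients are hypothetical placeholders):

```python
import math

def speed_change_probability(remaining_green_s, remaining_length_m,
                             current_speed_ms, beta=(-2.0, -0.08, 0.15, -0.9)):
    """Illustrative logistic model of the probability that a pedestrian
    suddenly changes speed. The covariates follow the abstract; the
    functional form and coefficients are hypothetical placeholders."""
    b0, b_green, b_len, b_speed = beta
    z = (b0 + b_green * remaining_green_s
            + b_len * remaining_length_m
            + b_speed * current_speed_ms)
    return 1.0 / (1.0 + math.exp(-z))

# Example: little green time left, a long way to go, walking slowly.
print(speed_change_probability(remaining_green_s=3, remaining_length_m=10,
                               current_speed_ms=1.2))
```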
Simulation integration for safety assessment
The proposed pedestrian maneuver model is implemented in an integrated simulation model by combining it with a comprehensive turning-vehicle maneuver model (Dang et al. 2012). The vehicle maneuver model is dedicated to representing the probabilistic nature of drivers' reactions to road geometry and surrounding road users, in order to evaluate the impact of user behavior on traffic safety. It produces speed profiles of turning vehicles considering the impacts of geometry (i.e. intersection angles, setback distance of the crosswalks) and the gap between the expected arrival time of the vehicle and that of the pedestrians at the conflict area. The proposed model allows us to study the dependencies and interactions between pedestrians and turning vehicles at crosswalks. Using the integrated traffic simulation, pedestrian-vehicle conflicts are generated and surrogate safety measures, such as Post Encroachment Time and vehicle speeds at conflict points, are estimated. These measures are used to evaluate the probability and severity of pedestrian-vehicle conflicts. To verify the characteristics of the simulated conflicts, estimated and observed surrogate safety measures at a selected signalized crosswalk are compared through statistical tests.
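For context, Post Encroachment Time, one of the surrogate safety measures named above, reduces to a simple time difference between the two road users' occupancies of the conflict area; a minimal sketch:

```python
def post_encroachment_time(t_first_leaves, t_second_arrives):
    """Post Encroachment Time (PET): the time between the first road user
    leaving the conflict area and the second one arriving. A smaller PET
    indicates a more severe conflict; timestamps are in seconds."""
    return t_second_arrives - t_first_leaves

# Example: the pedestrian clears the conflict area at t = 12.4 s and the
# turning vehicle arrives at t = 13.1 s, giving a PET of 0.7 s.
print(post_encroachment_time(12.4, 13.1))
```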
Conclusions
Considering the sudden speed change behavior of pedestrians in the simulation environment generates more reliable and realistic pedestrian maneuvers and turning-vehicle trajectories, which enables more accurate assessment of pedestrian-vehicle conflicts. This enables the assessment of improvements in signal control settings and in the geometric layout of crosswalks towards safer and more efficient operations. Furthermore, the model is useful for real-time detection of hazardous conflict events, which can be applied in vehicle safety assistance systems.
This research is supported by JSPS KAKENHI Grant No. 15H05534. The authors are grateful to Prof. Hideki Nakamura and Ms. Xin Zhang for providing video survey data.
References
Dang, M.T., et al. (2012). Development of a Microscopic Traffic Simulation Model for Safety Assessment at Signalized Intersections, Transportation Research Record, 2316, pp. 122–131.
Iryo-Asano, M., Alhajyaseen, W., Zhang, X. and Nakamura, H. (2015) Analysis of Pedestrian Speed Change Behavior at Signalized Crosswalks, 2015 Road Safety & Simulation International Conference, October 6th–8th, Orlando, USA.
Montufar, J., Arango, J., Porter, M., and Nakagawa, S. (2007), The Normal Walking Speed of Pedestrians and How Fast They Walk When Crossing The Street, Proceedings of the 86th Annual Meeting of the Transportation Research Board, Washington D. C., USA.
Shahdah, U., et al. (2015). Application of traffic microsimulation for evaluating safety performance of urban signalized intersections, Transportation Research Part C, 60, pp. 96–104.
Wang, W., Guo, H., Gao, Z., and Bubb, H. (2011). Individual Differences of Pedestrian Behaviour in Midblock Crosswalk and Intersection, International Journal of Crashworthiness, Vol. 16, No. 1, pp. 1–9.
Bi-Text Alignment of Movie Subtitles for English-Arabic Statistical Machine Translation
Authors: Fahad Ahmed Al-Obaidli and Stephen Cox
With the increasing demand for access to content in foreign languages in recent years, we have also seen a steady improvement in the quality of tools that can help bridge this gap. One such tool is Statistical Machine Translation (SMT), which learns automatically from real examples of human translations, without the need for manual intervention. Training such a system takes just a few days, sometimes even hours, but requires a lot of sentences aligned to their corresponding translations, a resource known as a bi-text.
Such bi-texts contain translations of written texts, as they are typically derived from newswire, administrative, technical and legislative documents, e.g., from the EU and UN. However, with the widespread use of mobile phones and online conversation programs such as Skype, as well as personal assistants such as Siri, there is a growing need for spoken language recognition, understanding, and translation. Unfortunately, most bi-texts are not very useful for training a spoken-language SMT system, as the language they cover is written, which differs from speech in style, formality, vocabulary choice, length of utterances, etc.
It turns out that there exists a growing community-generated source of spoken language translations, namely movie subtitles. These come in plain text in a common format in order to facilitate rendering the text segments appropriately. The dark side of subtitles is that they are usually created for pirated copies of copyright-protected movies; yet, their use in research exploits a "positive side effect" of Internet movie piracy, which allows for easy creation of spoken bi-texts in a number of languages. This alignment typically relies on a key property of movie subtitles, namely the temporal indexing of subtitle segments, along with other features.
Due to the nature of movies, subtitles differ from other resources in several aspects: they are mostly transcriptions of movie dialogues that are often spontaneous speech, which contains a lot of slang, idiomatic expressions, and also fragmented spoken utterances, with repetitions, errors and corrections, rather than grammatical sentences; thus, this material is commonly summarised in the subtitles, rather than being literally transcribed. Since subtitles are user-generated, the translations are free, incomplete and dense (due to summarization and compression) and, therefore, reveal cultural differences. Degrees of rephrasing and compression vary across languages and also depend on subtitling traditions. Moreover, subtitles are created to be displayed in parallel to a movie in order to be linked to the movie's actual sound signal. Subtitles also arbitrarily include some meta information such as the movie title, year of release, genre, subtitle author/translator details and trailers. They may also contain visual translation, e.g., into a sign language. Certain versions of subtitles are especially compiled for the hearing-impaired to include extra information about non-spoken sounds that are either primary, e.g., coughing, or secondary background noises, e.g., soundtrack music, street noise, etc. This brings yet another challenge to the alignment process: the complex mappings caused by many deletions and insertions. Furthermore, subtitles must be short enough to fit the screen in a readable manner and are only shown for a short time period, which presents a new constraint to the alignment of different languages with different visual and linguistic features.
The languages a subtitle file is available in differ from one movie to another. Notably, the Arabic language, even though spoken by more than 420 million people worldwide and the 5th most spoken language in the world, has a relatively scarce online presence. For example, according to Wikipedia's statistics of article counts, Arabic is ranked 23rd. Yet, Web traffic analytics show that search queries for Arabic subtitles and traffic from the Arabic region are among the highest. This increase in demand for Arabic content is not surprising given the recent dramatic economic and socio-political shifts in the Arab World. On another note, Arabic, as a Semitic language, has a complex morphology, which requires special handling when mapping it to another language and therefore poses a challenge for machine translation.
In this work, we look at movie subtitles as a unique source of bi-texts in an attempt to align as many translations of movies as possible in order to improve English-to-Arabic SMT. Translating from English into Arabic is an underexplored translation direction and, due to the morphological richness of Arabic along with other factors, yields significantly lower results compared to translating in the opposite direction (Arabic to English).
For our experiments, we collected pairs of English-Arabic subtitles for more than 29,000 movies/TV shows, a collection bigger than any preexisting subtitle data set. We designed a sequence of heuristics to eliminate the noise inherent in the subtitles' source, in order to yield good-quality alignment. We aligned the subtitles by measuring the time overlap between segments, utilising the time information provided within the subtitle files. This alignment approach is language-independent and outperforms traditional approaches such as the length-based approach, which relies on segment boundaries to match translation segments; segment boundaries differ from one language to another, e.g., because of the need to fit the text on the screen.
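The time-overlap alignment can be sketched as follows; the intersection-over-union overlap measure, the greedy pairing and the 0.6 threshold are illustrative assumptions rather than the authors' exact heuristics.

```python
def overlap_ratio(seg_a, seg_b):
    """Temporal similarity of two subtitle segments given as
    (start_seconds, end_seconds); 1.0 = identical timing, 0.0 = disjoint."""
    start = max(seg_a[0], seg_b[0])
    end = min(seg_a[1], seg_b[1])
    inter = max(0.0, end - start)
    union = (seg_a[1] - seg_a[0]) + (seg_b[1] - seg_b[0]) - inter
    return inter / union if union > 0 else 0.0

def align_by_time(en_subs, ar_subs, threshold=0.6):
    """Greedy language-independent alignment: pair each English segment
    with the Arabic segment whose time span overlaps it most. en_subs and
    ar_subs are lists of ((start, end), text); the threshold is assumed."""
    pairs = []
    for span_e, text_e in en_subs:
        best = max(ar_subs, key=lambda s: overlap_ratio(span_e, s[0]),
                   default=None)
        if best and overlap_ratio(span_e, best[0]) >= threshold:
            pairs.append((text_e, best[1]))
    return pairs
```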
Our goal was to maximise the number of aligned sentence pairs while minimising alignment errors. We evaluated our bi-texts both in relative terms and extrinsically, i.e., by measuring the quality of an SMT system that used each bi-text for training. We automatically evaluated our SMT systems using BLEU, a standard measure for machine translation evaluation. We also implemented an in-house Web application in order to crowd-source human judgments comparing the output of the SMT baseline and that of our best-performing system.
Our experiments yielded bi-texts of varying size and relative quality, which we used to train SMT systems. Adding any of our bi-texts improved the baseline SMT system, which was trained on TED talks from the IWSLT 2013 competition. Ultimately, our best SMT system outperformed the baseline by about two BLEU points, a very significant improvement that is clearly visible to humans, as confirmed in our manual evaluation. We hope that the resulting subtitles corpus, the largest collected so far (about 82 million words), will facilitate research in spoken-language SMT.
A Centralized System Approach to Indoor Navigation for the Visually Impaired
Authors: Alauddin Yousif Al-Omary, Hussain M. Al-Rizzo and Haider M. AlSabagh
People who are Blind or Visually Impaired (BVI) have one goal in common: to navigate through unfamiliar indoor environments without the intervention of a human guide. The number of blind people in the world is not accurately known at present; however, based on the 2010 Global Data on Visual Impairments of the World Health Organization, approximately 285 million people are estimated to be visually impaired worldwide: 39 million are blind and 246 million have low vision, 90% of them in developing countries, with 82% of blind people aged 50 and above. Available extrapolated statistics about blindness in some countries of the Middle East show ∼102,618 blind people in Iraq, ∼5,358 in the Gaza Strip, ∼22,692 in Jordan, ∼9,129 in Kuwait, ∼104,321 in Saudi Arabia, and ∼10,207 in the United Arab Emirates. These statistics reveal the importance of developing a useful, accurate, and easy-to-use navigation system to help this large population of disabled people in their everyday lives. Various commercial products are available to navigate BVI people in outdoor environments based on the Global Positioning System (GPS), where the receiver must have a clear view of the sky. Indoor geo-location, on the other hand, is much more challenging because objects surrounding the user can block or interfere with the GPS signal.
In this paper, we present a centralized wireless indoor navigation system to aid the BVI. A centralized approach is adopted because of the lack of research in this area. Some proposed navigation systems require users to inconveniently carry heavy navigation devices; some require administrators to install a complex network of sensors throughout a building; and others are simply impractical. The system consists of four major components: 1) a Wireless Positioning Subsystem, 2) a Visual Indoor Modeling Interface, 3) a Guidance and Navigation Subsystem, and 4) a Path-Finding Subsystem. It is designed not only to accurately locate, track, and navigate the user, but also to find the safest travel path and to communicate easily with the BVI.
A significant part of the navigation system is the virtual modeling of the building and the design of the path-finding algorithms, which will be the main focus of this research. Ultimately, the proposed system provides the design and building blocks for a fully functional package that can be used to build a complete centralized indoor navigation system, from creating the virtual models for buildings to tracking and interacting with BVI users over the network.
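One classic way to realize the safest-path requirement is shortest-path search over a building graph whose edge costs blend distance with hazard penalties; a minimal sketch in Python (the graph, weights and penalty values are hypothetical):

```python
import heapq

def safest_path(graph, start, goal):
    """Dijkstra over a building graph for a BVI-friendly route. graph maps
    node -> [(neighbor, cost)], where cost blends walking distance with a
    hazard penalty (e.g. stairs); the weighting scheme is an assumption."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + edge_cost, neighbor,
                                       path + [neighbor]))
    return float('inf'), []

# Example: corridor edges cost their length; the stairwell edge carries an
# added hazard penalty, steering the route through the elevator instead.
graph = {
    'room101':  [('hall_a', 5)],
    'hall_a':   [('stairs', 4 + 10), ('elevator', 6)],
    'stairs':   [('exit', 3)],
    'elevator': [('exit', 4)],
}
print(safest_path(graph, 'room101', 'exit'))
```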
Evaluation of Big Data Privacy and Accuracy Issues
Authors: Reem Bashir and Abdelhamid Abdelhadi Mansor
Nowadays a lot of massive data is stored, and the data itself typically contains much non-trivial but useful information. Data mining techniques can be used to discover this information, which can help companies in decision-making. However, in real-life applications, data is massive and stored over distributed sites; one of our major research topics is protecting privacy over this kind of data. Previously, the important characteristics, issues and challenges related to the management of large amounts of data have been explored. Various open-source data analytics frameworks that deal with large data analytics workloads have been discussed, and a comparative study of these frameworks and their suitability has been proposed. The digital universe is flooded with huge amounts of data generated by users worldwide. These data are diverse in nature and come from various sources in many forms. To keep up with the desire to store and analyze ever larger volumes of complex data, relational database vendors have delivered specialized analytical platforms that come in many shapes and sizes, from software-only products to analytical services that run in third-party hosted environments. In addition, new technologies have emerged to address exploding volumes of complex data, including web traffic, social media content and machine-generated data such as sensor data and global positioning system data.
Big data is defined as data of such volume that new technologies and architectures are required to extract value from it through capture and analysis. Due to this size, it becomes very difficult to perform effective analysis using existing traditional techniques. Big data has become a prominent research field, especially when it comes to decision making and data analysis. However, big data, through its properties of volume, velocity, variety, variability, value and complexity, puts forward many challenges. Since big data is a recently emerging technology in the market that can bring huge benefits to business organizations, it becomes necessary that the various challenges and issues associated with bringing in and adapting to this technology are brought to light. Another challenge is that the collected data may not be accurate enough, leading to inconsistent analysis that can critically affect decisions based on it. Moreover, it is clearly apparent that organizations need to employ data-driven decision making to gain competitive advantage. Processing, integrating and interacting with more data should make it better data, providing both more panoramic and more granular views to aid strategic decision making. This is made possible by big data exploiting affordable and usable computational and storage resources. Many offerings are based on the Map-Reduce and Hadoop paradigms, and most focus solely on the analytical side. Nonetheless, in many respects it remains unclear what big data actually is; current offerings appear as isolated silos that are difficult to integrate and/or make it difficult to better utilize existing data and systems. Data is growing at such speed that handling exabyte-scale volumes is difficult, mainly because the volume is increasing rapidly in comparison to computing resources. The term 'big data', as used nowadays, is something of a misnomer, as it points only to the size of the data while paying little attention to its other properties.
If data is to be used to make accurate decisions in time, it must be available in an accurate, complete and timely manner. This makes the data management and governance process somewhat complex, adding the necessity of making data open and available to government agencies in a standardized manner, with standardized APIs, metadata and formats, thus leading to better decision making, business intelligence and productivity improvements.
This paper presents a discussion and evaluation of the most prominent techniques used in the processes of data collection and analysis, in order to identify the privacy defects in them that affect the accuracy of big data. Based on the results of this analysis, recommendations are provided for improving data collection and analysis techniques that will help avoid most, if not all, of the problems facing the use of big data in decision making. Keywords: Big Data, Big Data Challenges, Big Data Accuracy, Big Data Collection, Big Data Analytics.
Video Demo of LiveAR: Real-Time Human Action Recognition over Live Video Streams
By Yin Yang
We propose to present a video demonstration of LiveAR at the ARC'16 conference. For this purpose, we have prepared three demo videos, which can be found in the submission files. These videos show the effectiveness and efficiency of LiveAR running on video streams containing a diverse set of human actions. Additionally, the demo exhibits important system performance parameters such as latency and resource usage.
LiveAR is a novel system for recognizing human actions, such as running and fighting, in a video stream in real time, backed by a massively parallel processing (MPP) platform. Although action recognition is a well-studied topic in computer vision, so far most attention has been devoted to improving accuracy rather than efficiency. To our knowledge, LiveAR is the first system that achieves real-time efficiency in action recognition, which can be a key enabler in many important applications, e.g., video surveillance and monitoring of critical infrastructure such as water reservoirs. LiveAR is based on a state-of-the-art method for offline action recognition that obtains high accuracy; its main innovation is to adapt this base solution to run on an elastic MPP platform to achieve real-time speed at an affordable cost.
The main objectives in the design of LiveAR are to (i) minimize redundant computations, (ii) reduce communication costs between nodes in the cloud, (iii) allow a high degree of parallelism and (iv) enable dynamic node additions and removals to match the current workload. LiveAR is based on an enhanced version of Apache Storm. Each video manipulation operation is implemented as a bolt (i.e., logical operator) executed by multiple nodes, while input frames arrive at the system via a spout (i.e., streaming source). The output of the system is presented on screen using FFmpeg.
Next we briefly explain the main operations in LiveAR. The dense point extraction bolt is the first step of video processing, with two input streams: the input video frame and the current trajectories. The output of this operator consists of dense points sampled in the video frame that are not already on any of the current trajectories. In particular, LiveAR partitions the frame into regions and assigns each region to a dense point evaluator, each running in a separate thread. The sampled coordinates are then grouped according to the partitioning and routed to the corresponding dense point evaluator. Meanwhile, coordinates on current trajectories are similarly grouped by a point dispatcher and routed accordingly. Such partitioning and routing minimizes network transmissions, as each node is fed only the pixels and trajectory points it needs.
The optic flow generation operator is executed by multiple nodes in parallel, similarly to the dense point extractor. An additional challenge here is that the generation of optic flows involves (i) comparing two frames at consecutive time instances and (ii) multiple pixels in determining the value of the flow at each coordinate. Point (i) means that the operator is stateful: each node must store the previous frame and compare it with the current one. Hence, node additions and removals (necessary for elasticity) become non-trivial, as a new node does not immediately possess the states (i.e., pixels of the previous frame) necessary to work on its inputs. Regarding (ii), each node cannot simply handle a region of the frame, as in the dense point extractor, because the computation at one coordinate relies on the surrounding pixels. Our solution in LiveAR is to split the frame into overlapping patches; each patch contains a partition of the frame as well as the pixels surrounding that partition. This design effectively reduces the amount of network transmissions, thus improving system scalability.
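The overlapping-patch partitioning can be sketched as follows; the patch grid and halo width are illustrative assumptions:

```python
import numpy as np

def split_with_halo(frame, rows, cols, halo):
    """Split a frame into rows x cols patches, each padded with `halo`
    surrounding pixels, so per-patch optic-flow computation needs no
    neighbor communication. Patch counts and halo width are assumptions."""
    h, w = frame.shape[:2]
    patches = []
    for i in range(rows):
        for j in range(cols):
            top = max(0, i * h // rows - halo)
            bottom = min(h, (i + 1) * h // rows + halo)
            left = max(0, j * w // cols - halo)
            right = min(w, (j + 1) * w // cols + halo)
            patches.append(((top, left), frame[top:bottom, left:right]))
    return patches

# Example: a 480x640 frame split into 2x2 patches with a 16-pixel halo.
frame = np.zeros((480, 640), dtype=np.uint8)
for (top, left), patch in split_with_halo(frame, 2, 2, 16):
    print(top, left, patch.shape)
```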
Lastly, the trajectory tracking operator involves three inputs: the current trajectories, the dense points detected from the input frame, and the optic flows of the input frame. The main idea of this operator is to “grow” a trajectory, either an existing one or a new one starting at a dense point, by adding one more coordinate computed from the optic flow. Note that it is possible that the optic flow indicates that there is no more coordinate on this trajectory in the input frame, ending the trajectory. The parallelization of this operator is similar to that of the dense point extractor, except that each node is assigned trajectories rather than pixels and coordinates. Grouping of the trajectories is performed according to their last coordinates (or the newly identified dense points for new trajectories).
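A minimal sketch of the trajectory-growing step, assuming a simplified flow representation that maps a coordinate to its motion vector (or to nothing when the trajectory ends):

```python
def grow_trajectory(trajectory, flow):
    """Extend a trajectory by one point using the optic flow at its last
    coordinate. `flow` maps (x, y) -> (dx, dy), or has no entry when the
    point leaves the frame; this representation is an assumption."""
    x, y = trajectory[-1]
    motion = flow.get((x, y))
    if motion is None:
        return trajectory, False          # the trajectory ends here
    dx, dy = motion
    return trajectory + [(x + dx, y + dy)], True

# Example: a trajectory drifting one pixel to the right per frame.
flow = {(10, 20): (1, 0)}
print(grow_trajectory([(9, 20), (10, 20)], flow))
```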
FPGA Based Image Processing Algorithms (Digital Image Enhancement Techniques) Using Xilinx System Generator
FPGAs have many significant features that make them a suitable platform for processing real-time algorithms. They give substantially higher performance than programmable Digital Signal Processors (DSPs) and microprocessors. At present, the use of FPGAs in the research and development of applied digital systems for specific tasks is increasing. This is due to the advantages FPGAs have over other programmable devices: high clock frequency, a high number of operations per second, code portability, reusability of code libraries, low cost, parallel processing, the capability of interacting with high- or low-level interfaces, security, and Intellectual Property (IP) protection.
This paper presents the concept of implementing digital image processing algorithms in hardware using a field programmable gate array (FPGA). It focuses on implementing an efficient architecture for image processing algorithms such as image enhancement (point-processing techniques) using the fewest possible System Generator blocks. We use the modern approach of Xilinx System Generator (XSG) for system modeling and FPGA programming. Xilinx System Generator is a MATLAB-based tool that generates the bitstream file (*.bit), the netlist, and timing and power analyses. The performance of these architectures was evaluated by implementing them on the XUPV5-LX110T FPGA board.
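In software terms, the point-processing enhancements targeted here compute each output pixel from the corresponding input pixel alone, typically through a lookup table, which is also why they map naturally onto FPGA block structures; a small NumPy sketch (the gain and bias values are illustrative):

```python
import numpy as np

def point_process(image, gain=1.2, bias=10):
    """Point-processing enhancement: each output pixel depends only on the
    corresponding input pixel, here via a precomputed 256-entry lookup
    table (the same structure maps onto an FPGA ROM/LUT block)."""
    lut = np.clip(gain * np.arange(256) + bias, 0, 255).astype(np.uint8)
    return lut[image]

def negative(image):
    """Image negative, the classic point operation: s = 255 - r."""
    return 255 - image

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
print(point_process(img))
print(negative(img))
```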
Alice-based Computing Curriculum for Middle Schools
Authors: Saquib Razak, Huda Gedawy, Don Slater and Wanda Dann
Alice is visualization software for introducing computational thinking and programming concepts in the context of creating 3D animations. Our research aims to introduce computational thinking and problem-solving skills in middle schools in Qatar. To make this aim attainable, we have adapted the Alice software to a conservative Middle Eastern culture, developed curricular materials, and provided professional development workshops for teachers and students in the Middle East. There is a trend for countries to evaluate curricula from other cultures and then try to bring successful curricula into their own school systems. This trend is a result of societies beginning to realize the importance of education and knowledge; Qatar's efforts towards building a knowledge-based society and upgrading its higher education infrastructure are proof of this realization. The challenge is to recognize that although a strong curriculum is necessary, simply porting a successful curriculum to a different environment is not sufficient to guarantee success. Here we share our attempt to take a tool, with its associated curriculum, that has been very successful in several countries in the West, and apply it in an environment with very different cultural and social values.
The Alice ME project is targeted at middle school (grades 6–8) teachers and students in the Middle East. The overall goal of the project is to adapt the Alice 2 software to the local cultures, develop new instructional materials appropriate for local systems, and test the effectiveness of the Alice approach at the middle school level. The “Alice approach” – using program visualization to teach/learn analytic, logical, and computational thinking, problem solving skills and fundamental programming concepts in the context of animation – remains the same.
In the formative phase of this project, our goal was to understand the environment and local culture and to evaluate the opportunities and challenges. The lessons learned in this phase are being used to formulate the future direction of our research. Although the Middle Eastern countries are rapidly modernizing, the desire to maintain traditional norms is strong. For this reason, we compiled two lists of models. The first was a list of existing models in the Alice gallery that are not appropriate for Middle Eastern culture; Qatar (and the Middle East in general) is a religious society that follows conservative norms of dress for both men and women. The second was a list of models that would be interesting and relevant to Qatari society. These two lists helped us determine which models might be modified, removed, or added to the gallery. We found that Qatar is a cultural society with much emphasis on local customs: local hospitality, religion, and traditional professions like pearl diving, fishing, and police work have a special place in society. We also discovered that people in general, and boys in particular, have a special respect for camels and the desert.
We created the curriculum in collaboration with one private school and the Supreme Education Council; creating artifacts in isolation and expecting educational systems to adopt them is not a prudent approach. Through this collaboration, we learned that a majority of the existing ICT and computing curricula are based on step-by-step instructions that students are expected to follow and reproduce, with little emphasis on student learning, creativity, and application.
Most ICT teachers in Qatar, both in public and private schools, are trained ICT professionals. At the same time, most of these teachers are not familiar with program visualization tools such as Alice and have not taught fundamental programming concepts in their classes. As a result, the need for professional development workshops is urgent. We have conducted several workshops for teachers to help them use Alice during the pilot study of the curriculum. During these workshops, we focus on two main concepts – learning to use Alice as a tool, and learning to teach computational thinking using Alice.
We have piloted the curriculum, instructional materials, and the new 3-D models in the Alice gallery with middle school students in one private English school and two public Arabic schools. The pilot test involved more than 400 students across the three schools. During pilot testing, we conducted a survey to obtain initial feedback regarding the 3D models from the Middle East gallery (students have access to all models that are part of Alice's core gallery). Through these surveys, we learned that models of objects that students use in everyday life were the most popular.
As part of the curriculum to teach computational thinking using Alice as a tool, we have created several artifacts which are made available to the schools. These items include:
Academic plan for the year
Learning outcomes for each lecture, PowerPoint presentations, class exercises, and assessment questions
Student textbook – One English book for grade 8, one Arabic textbook for grade 8 and one Arabic textbook for grade 11.
One of the most important skills essential for building the future generation is critical thinking. Although we are currently only looking at the acceptability of the newly created 3-D models and the usability of our curriculum and instructional material, we are also curious about the effectiveness of this curriculum in teaching computational thinking. We analyzed the results of an exam conducted at a local school and observed that students in grade 7 who had followed the Alice-based curriculum performed better than those in grade 9 on the same exam. This exam was designed to measure critical thinking skills in problem solving without any reference to Alice. We hope that this result is directly related to the students' experience with Alice, as it makes students think about a problem from different perspectives. We acknowledge that more formal work still needs to be done to support our hypothesis.
This academic year (2015–2016), Alice based curriculum is being used in Arabic in six independent schools and in English in four private English schools. There are more than 1400 students currently studying this content.
Towards a K-12 Game-based Educational Platform with Automatic Student Monitoring: “INTELLIFUN”
Authors: Aiman Erbad, Sarah Malaeb and Jihad Ja'am
Since the beginning of the twenty-first century, digital technologies have increasingly supported teaching and learning activities. Because learning is most effective when it starts early, advanced early-years educational tools are highly recommended to help new generations gain the skills necessary to build opportunities and progress in life. Despite all the advances in digital learning, there are still many problems that teachers, students and parents face. Students' learning motivation and problem-solving ability remain weak, and working memory capacity is found to be low for children under 11 years old, which causes learning difficulties such as developmental coordination disorder, difficulties in mathematics calculation, and language impairments. The latest PISA (Programme for International Student Assessment) shows that Qatar has the lowest scores in mathematics, science and reading performance compared to other countries in a similar condition, ranking 63rd of the 65 countries involved, even though Qatar's GDP and general government expenditure are high (OECD 2012).
Another problem affecting the educational experience of young children is family engagement. Parents need to be more involved in the learning process and to have quick, timely and detailed feedback about their children's progress in different topics of study. In fact, school days are limited, and parents can play an important role in improving their children's progress in learning and understanding concepts. Traditional assessment tools provide global grading, usually by topic of study (e.g., Algebra). Parents need a grading system by learning skill (e.g., addition facts to 10, solving missing-number problems, subtraction of money) to have a clear view of the specific skills their children need to improve. Finally, teachers also need an automated skills-based student monitoring tool to observe students' progress against the learning objectives, so they can focus on personalized tutoring tactics and make accurate decisions. Such a tool allows teachers to focus more on students' weaknesses and take the necessary actions to overcome these problems.
Recent studies have shown that students can become more motivated to learn with game-based learning tools. These interactive elements facilitate problem solving, make learning new concepts easier, and encourage students to work harder at school and at home. Active learning using a game-based model promotes long-term retention of information, which helps students increase their exam scores while acquiring the needed skills appropriately. We conducted a survey and analyzed the features of 31 leading technologies in the digital learning industry. We found that only 21 of them offer educational games, 22 are dedicated to the elementary age range, 15 offer digital resources to support mathematics and science, 11 provide some digital tools for assessing children's skills, and 6 include an automated progress-reporting engine, most of which require manual data entry by teachers. There is a need for a complete game-based learning platform with automatic performance and progress reporting that requires no manual intervention and, in particular, is customized to fit elementary school curriculum standards.
We developed an educational platform called ‘IntelliFun’ that uses educational games to automatically monitor and assess the progress of elementary school children. It can be applied more broadly to any course that uses outcome-based learning through games. Our intelligent game-based ‘IntelliFun’ platform provides a potential solution to several serious issues in education. Its entertaining gaming features improve students' learning motivation, problem-solving ability and working memory capacity. In parallel, its student performance monitoring features empower family engagement. Having these features integrated in one technology makes ‘IntelliFun’ a novel solution in digital education.
To generate students' outcomes while they play the games, we need an effective technology that relates curriculum standards and learning objectives to the game worlds' content (i.e., scenes and activities). We use an ontology-based approach: we designed a new ontology model that maps program curricula and learning objectives to the flow-driven game-world elements. Children's performance is evaluated through the ontology using information extraction with an automated reasoning mechanism guided by a set of inference rules. Technically, using ontologies in the field of education and games is very challenging, and our data model forms a novel solution to two issues:
• The complexity of designing educational data models where learning objectives and curriculum standards are matched and incorporated in serious games, and
• The complexity of providing advanced reasoning over the data.
This allows the fusion of many challenging technologies: digital education, semantic web, games, monitoring systems and artificial intelligence.
Our work is deeply rooted in the state of the art in educational games and digital education systems. The curriculum ontology was inspired by the British Curriculum Ontology (BBC 2013). The instances related to learning objectives are extracted from the elementary curriculum standards of the Supreme Education Council of Qatar. Our ontology model for games follows story-based scenarios described in Procedural Content Generation (Hartsook 2011) and HoloRena (Juracz 2010). We used the trajectory trace ontology described in STSIM (Corral 2014) to design the student monitoring ontology. To evaluate students' performance, we use an inference-rule-based reasoning engine to query correct, incorrect and incomplete actions performed by the player, as described in Ontology-based Information Extraction (Gutierrez 2013). To measure a learner's performance, key indicators feed the reasoning engine, which executes the appropriate calculation methods.
The platform is implemented in a 3-tier architecture with mobile game applications. The games can query and update the ontology in real time through a web service by invoking data management, reasoning, monitoring and reporting operations using the Apache Jena ontology API. The platform can dynamically generate game content based on children's preferences and acquired knowledge. Its monitoring features allow teachers to track each child's achievement of every learning objective and also empower parents' engagement in their children's learning experience: they can follow up on their children and know their weaknesses and strengths. ‘IntelliFun’ is used to improve children's learning outcomes and keep them motivated while playing games.
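As a toy illustration of this rule-based evaluation (our own Python sketch, not the Apache Jena implementation; all names and rules are hypothetical), the snippet below stores curriculum knowledge as ontology-style triples, follows a 'teaches' link to resolve the skill a game scene exercises, and applies simple inference rules to classify logged player actions before aggregating a per-skill report:

```python
# Minimal sketch of ontology-driven evaluation; not the IntelliFun code.

# Hypothetical curriculum triples: (subject, predicate, object)
triples = [
    ("game1.scene3", "teaches", "addition_facts_to_10"),
    ("addition_facts_to_10", "partOf", "grade1_mathematics"),
]

def skill_of(scene):
    # Tiny 'reasoning' step: follow the 'teaches' predicate for a scene.
    return next(o for s, p, o in triples if s == scene and p == "teaches")

def classify(action):
    # Inference rules: correct / incorrect / incomplete, as described above.
    if action["answer"] is None:
        return "incomplete"
    return "correct" if action["answer"] == action["expected"] else "incorrect"

def report(actions):
    """Aggregate a per-skill progress report from classified actions."""
    out = {}
    for a in actions:
        counts = out.setdefault(skill_of(a["scene"]),
                                {"correct": 0, "incorrect": 0, "incomplete": 0})
        counts[classify(a)] += 1
    return out

log = [{"scene": "game1.scene3", "expected": 7, "answer": 7},
       {"scene": "game1.scene3", "expected": 9, "answer": 8},
       {"scene": "game1.scene3", "expected": 5, "answer": None}]
print(report(log))
# {'addition_facts_to_10': {'correct': 1, 'incorrect': 1, 'incomplete': 1}}
```

A report keyed by skill rather than by topic is what lets parents and teachers see, per learning objective, where a child needs help.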
We aim to start testing our platform with real users, using the grade 1 mathematics curriculum as a case study. Our user study will include students, parents and teachers, who will answer an evaluation questionnaire after testing the technology. This will help us evaluate the efficacy of the platform and ascertain its benefits by analyzing its impact on students' learning experience. An interesting direction for future work is the use of data mining techniques in the reasoning engine to evaluate students' performance with complex key performance indicators. We may also consider dynamic generation of game-world content based on students' preferences and acquired learning skills.
-
-
-
Enhancing Information Security Process in Organisations in Qatar
Due to the universal use of technology and its pervasive connection to the world, organisations have become more exposed to frequent and various threats (Rotvold, 2008). Therefore, organisations today are giving more attention to information security, which has become a vital and challenging issue. Mackay (2013) noted that the significance of information security, particularly information security policies and awareness, is growing due to the increasing use of IT and computerization. Gordon & Loeb (2006) stated that information security involves a group of actions intended to protect information and information systems, spanning software, hardware, physical security and human factors, where each element has its own features. Information security protects not only the organisation's data but the complete infrastructure that enables the information's use. Organisations face an increasing number of security breaches; the more accessible information is to the public, the greater the threat, so security requirements need to be tightened.
Information security policies control employees' behaviour as well as securing the use of hardware and software. Organisations benefit from implementing information security policies, as they help them classify their information assets and define the importance of those assets to the organisation (Canavan, 2003). An information security policy can be defined as a set of principles, regulations, methodologies, procedures and tools created to secure the organisation from threats. Boss and Kirsch (2007) stated that employees' compliance with information security policies has become an important socio-organizational resource. Information security policies are applied in organisations to provide employees with guidelines that guarantee information security.
Herold (2010) stressed the importance for organisations of having continuous training programmes and educational awareness in order to obtain the required results from the implementation of an information security policy. Security experts emphasise the importance of security awareness programmes and how they improve information security as a whole. Nevertheless, implementing security awareness in organisations is a challenging process, as it requires actively interacting with an audience that usually does not appreciate the importance of information security (Manke, 2013). Organisations tend to use advanced security technologies and constantly train their security professionals, while paying little attention to enhancing the security awareness of employees and users. This makes employees and users the weakest link in any organisation (Warkentin & Willison, 2009).
In the last ten years, the State of Qatar has witnessed remarkable growth and development, having embraced information technology as a base for innovation and success. The country has seen tremendous improvement in the health care, education and transport sectors (Al-Malki, 2015). Information technology plays a strategic role in building the country's knowledge-based economy. Due to the country's increasing use of the internet and its connection to the global environment, Qatar needs to adequately address the global threats arising from the internet. Qatar's role in world politics has exposed it not just to traditional threats from hackers, but to more malicious actors such as terrorists, organized criminal networks and foreign government spying, and Qatar has faced considerable friction with countries attempting to breach its security. The Qatar Computer Emergency Response Team (Q-CERT), which is responsible for addressing the state's information security needs, stated: “As Qatar's dependence on cyberspace grows, its resiliency and security become even more critical, and hence the needs for a comprehensive approach that addresses this need” (Q-CERT, 2015). Q-CERT therefore established the National Information Assurance (NIA) policy, an information security policy designed to help both government and private sectors in Qatar protect their information and enhance their security. Nevertheless, the NIA policy has not yet been implemented in any organisation in Qatar, due to barriers and challenges to information security in Qatar, such as culture and awareness, which make the implementation of information security policies a challenging undertaking.
As a result, the scope of this research is to investigate information security in Qatar. There are many solutions for information security; some are technical and others non-technical, such as security policies and information security awareness. This research focuses on enhancing information security through non-technical solutions, in particular information security policy. The aim is to enhance information security in organisations in Qatar by developing a comprehensive Information Security Management System (ISMS) that considers Qatar's country-specific and cultural factors. An ISMS is a combination of policies and frameworks that ensure information security management (Rouse, 2011). This information security management approach is unique to Qatar as it considers Qatari culture and country-specific factors. Although many international information security standards are available, such as ISO 27001, this research shows that they do not always address the security needs particular to the culture of a country. There was therefore a need to define a unique ISMS approach for Qatar.
To accomplish the aim of this research, the following objectives must be achieved.
1. To review literature on information security in general and in Qatar in particular.
2. To review international and local information security standards and policies.
3. To explore the NIA policy in Qatar and compare it with others in the region and internationally.
4. To define problems with implementing information security policies and NIA policy in particular in organisations in Qatar.
5. To provide recommendations for the new version of the NIA policy.
6. To assess the awareness of employees on information security.
7. To assess the information security process in organisations in Qatar.
8. To identify the factors which affect information security in Qatar including culture and country specific factors.
9. To propose an ISMS for Qatari organisations taking into consideration the above factors.
10. To define a process for organisations to maintain the ISMS.
11. To evaluate the effectiveness of the proposed ISMS.
To achieve the aim of this research, different research methodologies, strategies and data collection methods will be used, including a literature review, surveys, interviews and a case study. The research comprises three phases. The researcher has currently completed phase one, which analyses the field of information security, highlights the gaps in the literature that this research can investigate further, and examines the country factors that affect information security and the implementation of information security policies. This phase also included interviews with experts in the fields of information technology, information security, culture and law, to identify the state of information security in Qatar and the factors that might affect its development, including cultural, legal and political issues. In the following two years, the researcher will complete phases two and three. During phase two, the researcher will measure employees' awareness and knowledge of information security, and of information security policies in particular. The findings will feed into phase three, which involves investigating the NIA policy further through a real implementation of an ISMS in an organisation in Qatar, analysing the main findings, and finally providing recommendations for improving the NIA policy.
In conclusion, the main contribution of this research is to investigate the NIA policy and the challenges facing its implementation, and then to define an ISMS process for the policy to assist organisations in Qatar in implementing and maintaining it. The research is valuable since it will perform the first real implementation of the NIA policy in an organisation in Qatar, taking advantage of the researcher's internship with ICT. It will move the policy from paper into a working ISMS and oversee its operation in a real organisation.
Keywords: Information security, National Information Assurance policy, Information Security Management System, Security Awareness, Information Systems
References
Rotvold, G. (2008). How to Create a Security Culture in Your Organization. Available: http://content.arma.org/IMM/NovDec2008/How_to_Create_a_Security_Culture.aspx. Last accessed 1st Aug 2015.
Manke, S. (2013). The Habits of Highly Successful Security Awareness Programs: A Cross-Company Comparison. Available: http://www.securementem.com/wp-content/uploads/2013/07/Habits_white_paper.pdf. Last accessed 1st Aug 2015.
Al-Malki. (2015). Welcome to Doha, the pearl of the Gulf. Available: http://www.itma-congress-2015.com/Welcome_note_2.html. Last accessed 4th May 2015.
Rouse, M. (2011). Information security management system (ISMS). Available: http://searchsecurity.techtarget.in/definition/information-security-management-system-ISMS. Last accessed 22nd Aug 2015.
Q-CERT. (2015). About Q-CERT. Available: http://www.qcert.org/about-q-cert. Last accessed 1st Aug 2015.
Warkentin, M., and Willison, R. (2009). “Behavioral and Policy Issues in Information Systems Security: The Insider Threat,” European Journal of Information Systems (18:2), pp. 101–105.
Mackay, M. (2013). An Effective Method for Information Security Awareness Raising Initiatives. International Journal of Computer Science & Information Technology. 5 (2), pp. 63–71.
Gordon, L. A. & Loeb, M. P. (2006). Budgeting Process for Information Security Expenditures. Communications of the ACM. 49 (1), pp. 121–125.
Herold. R (2010). Managing an Information Security and Privacy Awareness and Training Program. New York: CRC Press.
Boss, S., & Kirsch, L. (2007). The Last Line of Defense: Motivating Employees to Follow Corporate Security Guidelines. International Conference on Information Systems, pp. 9–12.
Canavan, S. (2003). An Information Security Policy Development Guide for Large Companies. SANS Institute.
-
-
-
Visible Light Communication for Intelligent Transport Systems
Authors: Xiaopeng Zhong and Amine Bermak
Introduction
Road safety is a worldwide health challenge that is of particular importance in Qatar. According to the WHO [1], global traffic fatalities and injuries number in the millions per year. Qatar has one of the world's highest rates of traffic fatalities, and road crashes cause more deaths there than common diseases [2]. Traffic congestion and vehicle fuel consumption are two other major problems. Integrating vehicle communication into intelligent transport systems (ITS) is important, as it will help improve road safety, efficiency and comfort by enabling a wide variety of transport applications. Radio frequency communication (RFC) technologies do not meet the stringent transport requirements due to spectrum scarcity, high interference and lack of security [3]. In this work, we propose an efficient and low-cost visible light communication (VLC) system based on CMOS transceivers for vehicle-to-vehicle (V2V) and infrastructure-to-vehicle (I2V) communication in ITS, as a complementary platform to RFC.
Objective
The proposed VLC system is designed to be low-cost and efficient, supporting the various V2V and I2V communication scenarios shown in Fig. 1. The VLC LED transmitters (Tx) are responsible for both illumination and information broadcasting. They are designed to support existing transport infrastructure (such as street lamps, guideboards and traffic lights) as well as vehicle lights, with low cost and complexity. The receivers (Rx) will be available on both the front and back of vehicles, with both vision and communication capabilities; the added vision capability enhances the robustness of communication.
System implementation
The VLC system implementation in Fig. 2 is a jointly optimized design of the transmitter, receiver and communication protocol. The LED transmitter work focuses on the design of an LED driver that efficiently combines illumination with communication modulation schemes. A light sensor is integrated to provide adaptive feedback for better power efficiency. Polarization techniques are utilized to cancel background light, which not only enhances image quality but also improves the robustness of VLC, as shown in Fig. 3(a) [4]. A polarization image sensor using a liquid crystal micro-polarimeter array has been designed, as illustrated in Fig. 3(b). The CMOS visible light receiver is based on a traditional CMOS image sensor but with an innovative architecture specifically for V2V and I2V VLC. It features dual readout channels: a compressive channel for image capture and a high-speed channel for VLC. Novel detection and tracking algorithms are used to improve communication speed, reliability and security. Compressive sensing is applied to image capture; the compression is facilitated by a novel analog-to-information conversion (AIC) scheme, which yields significant power savings in image capture and processing. A prototype AIC-based image sensor has been successfully implemented, as shown in Fig. 4 [5]. A VLC protocol is specifically tailored to V2V and I2V links over the custom transceivers: the PHY layer is based on MIMO-OFDM and the MAC layer on dynamic link adaptation, as an extension and optimization of the IEEE 802.15.7 standard for V2V and I2V VLC. A preliminary prototype VLC system has been built to verify feasibility. A kbps-level VLC channel has been achieved under illumination levels from tens to hundreds of lux, and further improvements are anticipated as the techniques described above mature.
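As a side illustration of how an LED can broadcast data while still serving as a steady light source, the sketch below Manchester-encodes a bit stream into on/off chips with a constant 50% duty cycle; run-length-limited coding of this kind is used for flicker mitigation in IEEE 802.15.7 PHY I, though the actual modulation of the prototype above may differ:

```python
# Minimal sketch (assumption: Manchester-coded OOK, as in IEEE 802.15.7
# PHY I) showing that the LED's average brightness is data-independent.

def manchester_encode(bits):
    # Each bit maps to two chips with a 0->1 or 1->0 transition, so every
    # bit period contains exactly one ON chip regardless of the data.
    return [chip for b in bits for chip in ((0, 1) if b == 0 else (1, 0))]

def manchester_decode(chips):
    return [0 if (a, b) == (0, 1) else 1 for a, b in zip(chips[::2], chips[1::2])]

data = [1, 0, 1, 1, 0]
chips = manchester_encode(data)
assert manchester_decode(chips) == data
assert sum(chips) * 2 == len(chips)  # 50% duty cycle -> flicker-free illumination
```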
Conclusion
An efficient and low-cost visible light communication system is proposed for V2V and I2V links, featuring a low-cost and power-efficient transmitter design, a dual-readout (imaging and VLC) receiver architecture, fast detection and tracking algorithms with compressive sensing, polarization techniques, and a dedicated communication protocol.
References
[1] Global status report on road safety 2013, World Health Organization (WHO).
[2] Sivak, Michael, “Mortality from road crashes in 193 countries”, 2014.
[3] Lu, N.; Cheng, N.; Zhang, N.; Shen, X.S.; Mark, J.W., “Connected Vehicles: Solutions and Challenges,” IEEE Internet of Things Journal, vol. 1, no. 4, pp. 289–299, Aug. 2014.
[4] X. Zhao, A. Bermak, F. Boussaid and V. G. Chigrinov, “Liquid-crystal micropolarimeter array for full Stokes polarization imaging in visible spectrum”, Optics Express, vol. 18, no. 17, pp. 17776–17787, 2010.
[5] Chen, D.G.; Fang Tang; Law, M.-K.; Bermak, A., “A 12 pJ/Pixel Analog-to-Information Converter Based 816 × 640 Pixel CMOS Image Sensor,” IEEE Journal of Solid-State Circuits, vol. 49, no. 5, pp. 1210–1222, 2014.
-
-
-
A General Framework for Designing Sparse FIR MIMO Equalizers Based on Sparse Approximation
Authors: Abubakr Omar Al-Abbasi, Ridha Hamila, Waheed Bajwa and Naofal Al-Dhahir
In broadband communications, the channel delay spread, defined as the duration in time, or samples, over which the channel impulse response (CIR) has significant energy, is often very long and results in a highly frequency-selective channel frequency response. A long CIR can spread over tens, or even hundreds, of symbol periods and impairs the signals that pass through such channels. For instance, a large delay spread causes inter-symbol interference (ISI) and inter-carrier interference (ICI) in multi-carrier modulation (MCM). Therefore, long finite impulse response (FIR) equalizers have to be implemented at high sampling rates to avoid performance degradation. However, the implementation of such equalizers is prohibitively expensive, as the design complexity of FIR equalizers grows with the square of the number of nonzero taps in the filter. Sparse equalization, where only a few nonzero coefficients are employed, is a widely used technique to reduce complexity at the cost of a tolerable performance loss. Nevertheless, reliably determining the locations of these nonzero coefficients is often very challenging.
In this work, we first propose a general framework that transforms the design of sparse single-input single-output (SISO) and multiple-input multiple-output (MIMO) linear equalizers (LEs) into the problem of sparsest approximation of a vector in different dictionaries. In addition, we compare several choices of sparsifying dictionaries under this framework. Furthermore, the worst-case coherence of these dictionaries, which determines their sparsifying effectiveness, is evaluated analytically and/or numerically. Second, we extend our framework to accommodate SISO and MIMO non-linear decision-feedback equalizers (DFEs). As with sparse FIR LEs, the design of sparse FIR DFEs can be cast as sparse approximation of a vector by a fixed dictionary, whose solution can be obtained either with greedy algorithms, such as Orthogonal Matching Pursuit (OMP), or with convex-optimization-based approaches, the former being more desirable due to its low complexity. Third, we further generalize our sparse design framework to the channel-shortening setup. Channel-shortening equalizers (CSEs) are used to ensure that the cascade of a long CIR and the CSE is approximately equivalent to a target impulse response (TIR) with a much shorter delay spread. Channel shortening is essential for communication systems operating over highly dispersive broadband channels with a large channel delay spread. Fourth, as an application of recent practical interest to the power-line communication (PLC) community, we consider channel shortening for the impulse responses of medium-voltage power lines (MV-PLs) of length 10 km and 20 km to reduce the cyclic prefix (CP) overhead in orthogonal frequency-division multiplexing (OFDM) and, hence, improve the data rate accordingly. For all design problems, we propose reduced-complexity sparse FIR SISO and MIMO linear and non-linear equalizers by exploiting the asymptotic equivalence of Toeplitz and circulant matrices, whereby the matrix factorizations involved in our design analysis can be carried out efficiently using the fast Fourier transform (FFT) and inverse FFT with negligible performance loss as the number of filter taps increases.
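For concreteness, the following minimal numpy sketch implements the OMP greedy solver mentioned above; the random Gaussian dictionary here merely stands in for the channel-derived dictionaries analyzed in the paper:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick k columns of A to match y."""
    residual, support = y.astype(float), []
    for _ in range(k):
        # Select the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Least-squares fit on the chosen support, then update the residual.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))       # stand-in sparsifying dictionary
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [1.0, -2.0, 0.5]   # 3 nonzero equalizer taps
y = A @ x_true
print(np.flatnonzero(omp(A, y, k=3)))    # recovers taps 5, 40, 90
```

Each iteration costs one correlation and one small least-squares solve, which is why the text above favors OMP over convex-optimization-based approaches for low-complexity designs.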
Finally, the simulation results show that allowing a small performance loss yields a significant reduction in the number of active filter taps for all proposed LE and DFE designs, which in turn results in substantial complexity reductions. The simulation results also show that the CIRs of MV-PLs of length 10 km and 20 km can be shortened to fit within the broadband PLC standards. Additionally, our simulations validate that the sparsifying dictionary with the smallest worst-case coherence results in the sparsest FIR filter design. Furthermore, the numerical results demonstrate the superiority of our proposed approach over conventional sparse FIR filters in terms of both performance and computational complexity.

Acknowledgment: This work was made possible by grant number NPRP 06-070-2-024 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
-
-
-
Novel Vehicle Awareness Measure for Secure Road Traffic Safety Applications
Authors: Muhammad Awais Javed and Elyes Ben Hamida
Future intelligent transport systems (ITS) are envisaged to offer drivers a safer and more comfortable driving experience by using wireless data exchange between vehicles. A number of applications could be realized with the increased vehicle vision and awareness provided by this technology, known as Vehicular Ad hoc Networks (VANETs). These applications include cooperative awareness, warning notification, safe lane change and intersection crossing, intelligent route selection, traffic management, parking selection, multi-player games and internet browsing.
The success of VANETs and their proposed applications depends on secure and reliable message transmission between vehicles. Every vehicle broadcasts periodic safety messages to the neighborhood traffic to announce its presence. These safety messages contain the vehicle's mobility information, including its location, speed, direction and heading. Based on them, vehicles build a local dynamic map (LDM) that provides a complete description of the surrounding traffic. Using the LDM, vehicles can look beyond line of sight and make safe and intelligent driving decisions.
An increased level of vehicle safety awareness is the primary goal of road safety applications, and an accurate measure of this awareness is critical to evaluating the impact of parameters such as security and vehicle density on vehicle safety and application quality of service. A precise metric for vehicle safety awareness should take into account the vehicle's knowledge of its surroundings and the accuracy of the information received in cooperative awareness messages (CAMs) and stored in the LDM. Existing metrics in the literature use quantitative measures of awareness, such as packet delivery ratio, and do not consider the accuracy and fidelity of the information in the LDM. Due to GPS error and outdated LDM information, vehicles can have a reduced level of awareness, resulting in the dissemination of false positives and false negatives that could badly impact road safety applications.
In this paper, we propose two novel metrics for evaluating vehicle safety awareness. Both metrics first apply our proposed vehicle-heading-based filtering mechanism, so that only the critical neighbors in the surroundings (i.e., those moving towards a vehicle that have a chance of colliding with it) are considered when calculating awareness. The first metric, the Normalized Error based Safety Awareness Level (SAL), calculates awareness from the number of neighbors a vehicle has successfully discovered in its LDM and a normalized distance error computed between each neighbor's actual position and the position recorded in the LDM. By accounting for the position error in the LDM, vehicles measure their awareness levels accurately.
To further improve this metric, we propose a weighted Normalized Error based Safety Awareness Level (wSAL) that assigns a higher weight to errors from nearby neighbor vehicles using a sigmoid function. Since the position error of a closer neighbor is more critical in safety applications, vehicle awareness is measured more accurately by giving such neighbors greater importance.
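A minimal sketch of the two metrics follows, under assumed functional forms; the normalization and sigmoid constants below are ours, chosen for illustration, not the paper's exact definitions:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def sal(neighbors, comm_range=100.0):
    """Normalized-error-based awareness over the critical (filtered) neighbors.
    Each entry carries the neighbor's true position and its (stale) LDM copy."""
    if not neighbors:
        return 1.0
    errs = [dist(n["true"], n["ldm"]) / comm_range for n in neighbors]
    return max(0.0, 1.0 - sum(errs) / len(errs))

def wsal(neighbors, comm_range=100.0, k=0.05, d0=50.0):
    """Weighted variant: a sigmoid gives errors of nearby neighbors more weight
    (the ego vehicle is assumed to sit at the origin)."""
    if not neighbors:
        return 1.0
    w = [1.0 / (1.0 + math.exp(k * (dist((0, 0), n["true"]) - d0)))
         for n in neighbors]
    e = [dist(n["true"], n["ldm"]) / comm_range for n in neighbors]
    return max(0.0, 1.0 - sum(wi * ei for wi, ei in zip(w, e)) / sum(w))

nbrs = [{"true": (20, 0), "ldm": (28, 0)},   # close neighbor, 8 m LDM error
        {"true": (80, 0), "ldm": (80, 2)}]   # distant neighbor, 2 m error
print(round(sal(nbrs), 3), round(wsal(nbrs), 3))
# 0.95 0.931: wSAL penalizes the nearby error more heavily
```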
We developed a simulation model using the NS-3 network simulator and the SUMO traffic simulator to generate realistic road traffic scenarios at different vehicle densities. Simulation results verify that existing metrics give optimistic estimates of vehicle awareness and that our proposed metrics improve the measure of awareness, leading to better performance evaluation of safety applications.
-
-
-
Energy Efficient Antenna Selection for a MIMO Relay Using RF Energy Harvesting
Authors: Amr Mohamed and Islam Samy
Due to the rapid growth in traffic demands and the number of subscribers, transmit energy consumption has become critical, both environmentally and economically. Increasing the energy efficiency of wireless networks is a central goal of 5G network research, and the research community has proposed promising green communication techniques. Energy efficiency can also be enhanced in a different way: energy drawn from renewable sources can compensate, totally or partially, for the traditional power consumption from the grid. Energy harvesting has emerged as a promising technique for increasing the sustainability of wireless networks. In this paper, we investigate energy-efficient antenna selection schemes for a MIMO relay powered by a hybrid energy source (from the grid or through RF energy harvesting), aiming to utilize the large number of antennas efficiently for both data decoding and energy transfer. We formulate an optimization problem and provide the optimal antenna selection scheme such that the joint power consumption (source plus relay power) is minimized while meeting the rate requirements. The problem is a mixed-integer, non-linear, non-convex program, i.e., prohibitively complex. We therefore consider two special cases of the general problem: Fixed Source Power Antenna Selection (FSP-AS), in which the source power is fixed and only the antenna selection is controlled, and All Receive Antenna Selection (AR-AS), in which all receiving antennas are turned ON. We also introduce two lower-complexity heuristics, Decoding Priority Antenna Selection (DP-AS) and Harvesting Priority Antenna Selection (HP-AS), and we compare our work with the Generalized Selection Combiner (GSC) scheme used in previous works.
The main contributions of our work can be summarized as follows:
(1) We introduce the energy harvesting technique as an effective way to improve the energy efficiency by using it as a substitute for the grid energy.
(2) In addition to the transmitted energy, we model the circuit power as an important part of the total energy consumption which can affect the energy efficiency.
(3) We allow each antenna to be turned ON or OFF individually, so that unneeded antennas can be switched off to save as much energy as possible.
(4) We introduce two special-case schemes, each targeting one type of energy consumption: the FSP-AS scheme focuses on the circuit energy, while the AR-AS scheme concentrates mainly on the transmitted energy.
(5) We also propose two heuristics to manage the complexity of the target problem. We evaluate the performance of the proposed schemes numerically; our key performance indicator (KPI) is the joint power consumed in the source and the relay. The simulation results show the gain of our optimal scheme in terms of energy efficiency, which can be up to 80% compared to solutions proposed in the literature. Our heuristics show reasonable performance at small target rates, with almost no gap to the optimal scheme at higher target rates. In future work, we will model more than one source and destination node and extend the model to include interference scenarios.
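A toy brute-force version of the antenna-selection idea (our construction with a simplified rate model, not the paper's optimizer) enumerates ON/OFF patterns over the relay antennas, charges circuit power per active antenna, and keeps the cheapest pattern that still meets the target rate:

```python
from itertools import product
import math

def best_selection(gains, p_circuit, p_tx, rate_target, bandwidth=1.0):
    """Minimize transmit + circuit power subject to a rate constraint."""
    best = None
    for mask in product([0, 1], repeat=len(gains)):   # every ON/OFF pattern
        if not any(mask):
            continue
        snr = p_tx * sum(g for g, on in zip(gains, mask) if on)
        rate = bandwidth * math.log2(1 + snr)         # simplified capacity model
        if rate < rate_target:
            continue                                  # rate requirement violated
        power = p_tx + p_circuit * sum(mask)          # circuit power per ON antenna
        if best is None or power < best[0]:
            best = (power, mask)
    return best

gains = [0.8, 0.1, 0.5, 0.05]   # hypothetical per-antenna channel gains
print(best_selection(gains, p_circuit=0.2, p_tx=1.0, rate_target=1.0))
# (1.4, (1, 0, 1, 0)): two antennas suffice; the weak ones stay OFF
```

Exhaustive search is exponential in the number of antennas, which is precisely why the special-case schemes and heuristics above matter at scale.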
-
-
-
Green Techniques for Environment-Aware 5G Networks
Authors: Hakim Ghazzai and Abdullah Kadri
Over the last decade, mobile communications have witnessed an unprecedented, perpetually increasing rise in mobile user demand due to the introduction of new services requiring extremely fast and reliable connectivity. Moreover, the number of devices connected to cellular networks is growing markedly with the emergence of machine-type communication and the internet of things. Data traffic on mobile networks is increasing at a rate of approximately 1.5 to 2 times a year, so mobile networks are expected to handle up to 1000 times more data traffic in 10 years' time. Because of this huge number of wireless terminals, in addition to the radio access networks (RANs) deployed to serve them, future fifth-generation (5G) cellular networks will suffer an enormous growth in energy consumption, with negative economic and environmental impacts. It is predicted that, if no action is taken, per-capita greenhouse gas (GHG) emissions for ICT will increase from 100 kg in 2007 to about 130 kg in 2020. There is therefore an urgent need to develop new techniques and technologies to cope with the exponential energy growth, and correspondingly the carbon emissions, of emerging wireless networks. From a cellular network operator's perspective, reducing fossil fuel consumption is not only about behaving in a green and responsible way towards the environment, but also about solving an important economic issue: energy consumption forces mobile operators to pay huge energy bills, which currently constitute around half of their operating expenditures (OPEX). It has been shown that cellular networks currently consume around 120 TWh of electricity per year, and mobile operators pay around 13 billion dollars to serve 5 billion connections per year.
Therefore, there is a growing need for more energy-efficient techniques that enhance green performance while respecting users' quality of experience. Although most of the proposed studies have focused on individual physical layer power optimizations, more sophisticated and cost-effective technologies should be adopted to meet the green objective of 5G cellular networks. This study investigates three important techniques that could be exploited separately or together to enable wireless operators to achieve significant economic benefits and environmental savings:
- Cellular networks powered by the smart grid: Smart grid is widely seen as one of the most important means that enhance energy savings and help optimize some of consumers' green goals. It can considerably help in reducing GHG emissions by optimally controlling and adjusting the consumed energy. Moreover, it allows the massive integration of intermittent renewable sources and offers the possibility to deliver electricity in a more cost-effective way with active involvement of customers in the procurement decision. Therefore, introducing the concept of smart grid as a new tool for managing the energy procurement of cellular networks is considered as an important technological innovation that would significantly contribute to the reduction of mobile CO2 emissions.
- Base station sleeping strategy: Several studies show that over 70% of the power is consumed by base stations (BSs) or long term evolution eNodeB (LTE-eNB) for 4G networks. Turning off redundant or lightly loaded BSs during off-peak hours can contribute to the reduction of mobile network energy consumption and GHG emissions.
- Green networking collaboration among competing mobile operators: the fundamental idea is to completely turn off the equipment of one service provider and serve the corresponding subscribers with infrastructure belonging to another operator. However, random collaboration may increase one operator's profit at the expense of its competitors, causing high energy consumption and very low profit for the active network. Fairness criteria should therefore be introduced for this type of collaboration.
In this study, we present the above techniques in detail and provide multiple simulation results measuring the gains they offer compared to traditional scenarios.
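As a toy illustration of the base-station sleeping strategy above (our simplification, with hypothetical loads and a unit-capacity model), the sketch below switches off lightly loaded BSs whenever a neighbor can absorb their traffic:

```python
def sleep_plan(loads, capacity, neighbors):
    """loads: BS -> offered load; neighbors: BS -> BSs that can absorb its users."""
    asleep, loads = set(), dict(loads)
    for bs in sorted(loads, key=loads.get):            # try lightest-loaded first
        for nb in neighbors.get(bs, []):
            if nb not in asleep and loads[nb] + loads[bs] <= capacity:
                loads[nb] += loads[bs]                 # hand the users over
                loads[bs] = 0.0
                asleep.add(bs)
                break
    return asleep

off_peak_loads = {"A": 0.1, "B": 0.2, "C": 0.7}
print(sleep_plan(off_peak_loads, capacity=1.0,
                 neighbors={"A": ["C"], "B": ["C"]}))
# {'A', 'B'}: C serves the combined load, so two of three BSs can sleep
```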
-
-
-
Vibration Energy Harvesting in Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM)
Authors: Loay Ismail, Sara Elorfali and Tarek Elfouly
Harvesting vibration energy from the ambient environment, such as the vibrations experienced by bridges due to vehicle movements, wind or earthquakes, has become an essential area of study for scientists aiming to design self-powered sensor nodes for wireless sensor networks (WSNs), providing more efficient systems that do not require human involvement.
One of the essential components of a WSN system is the sensor node, which continuously sends and receives information to monitor a behavior targeted by the application, for example the health of a bridge's infrastructure. Sensors are sometimes programmed to send monitoring data 24 hours a day, seven days a week. This configuration strains the sensors' batteries and shortens their lives, since sending and receiving data consumes power and reduces battery voltage levels. Energy harvesting is therefore critical to maintaining long-lived batteries that recharge themselves from ambient harvested energy, eliminating the need for humans to replace or recharge them at their locations in the network.
Recent structural health monitoring (SHM) systems in civil infrastructure environments have relied heavily on wireless sensor networks (WSNs) due to their efficient use of wireless sensor nodes. Such nodes can be fixed onto any part of the infrastructure, such as a bridge, to collect data remotely for monitoring and further processing. However, the main drawback of such sensor networks lies in the finite lifetime of their batteries. This problem has made the concept of harvesting energy from the ambient environment more important: ensuring efficient battery usage maximizes overall system uptime and makes efficient use of natural energy resources such as solar, wind and vibration energy.
This work studies the feasibility of using a piezoelectric vibration energy harvester to extend overall battery life using a single external super-capacitor as a storage unit for the harvested energy. The methodology follows the general flow of energy in a sensor node, which can be summarized as follows:
1 Piezoelectric Vibration Energy Harvester: converts the mechanical energy of ambient vibrations into electrical energy.
2 Energy Harvesting Circuit: responsible for power conditioning, enabling the circuit to output energy to the sensors under certain threshold criteria.
3 Energy Storage: the super-capacitor stores the harvested energy.
4 Energy Management Scheme: the scheme proposed in this work operates under the energy requirements and constraints of the sensor nodes in order to conserve battery voltage levels and extend battery life.
5 Wireless Sensor Nodes: each sensor node type has specific energy requirements that must be known so that it can be adequately powered and turned on using the harvested energy.
The main contribution of this work is an energy management scheme which ensures that the harvested energy supplied by the harvester circuit exceeds the energy consumed by the sensor. The proposed scheme demonstrates the feasibility of using impact vibrations for efficient energy harvesting and, subsequently, of extending the battery lifetime needed to power the wireless sensor nodes.
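A minimal sketch of that rule follows (threshold and energy values are hypothetical): the super-capacitor only releases energy to the node when it holds more than the node's next duty cycle will consume, so the battery is never drained to cover a shortfall:

```python
class Supercap:
    def __init__(self, capacity_j):
        self.capacity = capacity_j   # storage limit in joules
        self.stored = 0.0

    def harvest(self, energy_j):
        # Conditioned energy from the piezoelectric harvester, up to capacity.
        self.stored = min(self.capacity, self.stored + energy_j)

    def try_power(self, demand_j):
        # Power the node only if the harvested reserve exceeds the demand.
        if self.stored > demand_j:
            self.stored -= demand_j
            return True              # node wakes, senses and transmits
        return False                 # node stays off; the battery is spared

cap = Supercap(capacity_j=5.0)
for pulse in [0.8, 1.2, 0.5, 1.5]:   # hypothetical impact-vibration energies (J)
    cap.harvest(pulse)
    print(cap.try_power(demand_j=1.0), round(cap.stored, 2))
# False 0.8 / True 1.0 / True 0.5 / True 1.0
```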
Furthermore, as a future direction of work, to increase the amount of harvested energy, hybrid power sources can be explored by combining more than one energy source from the ambient environment, such as solar and vibration energy.
-
-
-
Design of a Novel Wireless Communication Scheme that Jointly Supports Both Coherent and Non-Coherent Receivers
Authors: Mohammad Shaqfeh, Karim Seddik and Hussein Alnuweiri
As is well known, wireless channels are characterized by temporal and spatial variations caused by, among other factors, multipath propagation of the signals and the mobility of the communicating devices or their surrounding environment. This has two consequences. First, the channel quality (i.e., amplitude) varies, changing the data rate (in bits/sec) that can be received reliably over the channel. Second, the channel phase varies, which necessitates that the receiver track these changes reliably in order to demodulate the transmitted signal correctly; this is called “coherent” transmission. If the receiver is unable to track the phase variations, the transmitter must use “non-coherent” modulation schemes, which can be detected without phase knowledge at the receiver, at the cost of a significant reduction in the data rate that can be transmitted reliably. Modern communication systems are therefore equipped with channel estimation algorithms to enable coherent reception. However, this is not always feasible. Channel estimation is usually accomplished by transmitting pilot signals at some frequency. Depending on the frequency of pilot transmission and the channel coherence time, some receivers may have reliable channel estimates while others may not. This is one reason why each mobile wireless standard supports some maximum velocity for mobile users, limited by the frequency of pilot transmission; users moving at higher speeds may not have reliable channel estimates and thus cannot receive any information via coherent transmission.
With this in mind, we are mainly interested in broadcasting systems such as mobile TV. These systems are usually transmitted using coherent modulation schemes to enable a quality of reception that cannot be maintained with low-rate non-coherent modulation schemes. Consequently, mobile users with unreliable channel estimates cannot receive such applications, while users with reliable channel estimates receive the transmitted stream reliably. Broadcasting applications are thus characterized by “all or nothing” reception depending on the mobility and channel conditions of the receiving terminals.
Alternatively, we propose a layered coding scheme from a new viewpoint that has not been addressed before in the literature. Our scheme has two layers: a base layer (non-coherent layer) that can be decoded by any receiver, even one without reliable channel estimates, and a refining layer (coherent layer) that can only be decoded by receivers with reliable channel estimates. The basic bits are transmitted on the first layer and the extra bits that improve quality on the second layer. Receivers with unreliable channel estimates can therefore decode the non-coherent layer, while receivers with reliable channel estimates can decode all the information and enjoy better service quality.
This proposed scheme can be designed using multi-resolution broadcast space-time coding, which allows the simultaneous transmission of low-rate (LR) non-coherent information for all receivers, including those with no channel state information (CSI), and high-rate (HR) coherent information for receivers with reliable CSI. The proposed scheme ensures that the communication of the HR layer is transparent to the underlying LR layer. We can show that both the non-coherent and coherent receivers achieve full diversity, and that the proposed scheme achieves the maximum number of communication degrees of freedom for non-coherent LR channels and coherent HR channels with unitarily-constrained input signals.
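The non-coherent base layer can be appreciated with a toy differential example (our illustration using DBPSK, not the multi-resolution space-time code itself): because information rides on phase transitions between consecutive symbols, an unknown channel phase cancels out at the detector:

```python
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 16)

# Differential encoding: a '1' flips the phase, a '0' keeps it.
symbols = [1.0 + 0j]
for b in bits:
    symbols.append(symbols[-1] * (1 if b == 0 else -1))
symbols = np.array(symbols)

theta = rng.uniform(0, 2 * np.pi)        # unknown channel phase (no CSI)
received = symbols * np.exp(1j * theta)

# Non-coherent detection: compare consecutive symbols; theta cancels out.
decoded = (np.real(received[1:] * np.conj(received[:-1])) < 0).astype(int)
assert np.array_equal(decoded, bits)     # recovered without phase knowledge
```

A coherent refinement layer superposed on top of such a base layer would, by contrast, only be recoverable by receivers that can estimate theta.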
-
-
-
Wearable D2D Routing Strategies for Urban Disaster Management – A Case Study
Authors: Dhafer Ben Arbia, Muhammad Mahtab Alam, Rabah Attia and Elyes Ben Hamida
Critical and public safety operations require real-time communication from the incident area(s) to the distant operations command center, passing through the evacuation and medical support areas. Data transmitted through such a network is extremely useful to decision makers and to the conduct of operations; any delay in communication may cost lives. Moreover, existing communication infrastructure (PSTN, WiFi, 4/5G, etc.) can be damaged and is often not available. An alternative is to deploy an autonomous tactical network at an unpredictable location and time. In this context, however, there are many challenges, especially how to effectively route and disseminate information. In this paper, we present the behavior of various multi-hop routing protocols evaluated in a simulated disaster scenario over different communication technologies (WiFi IEEE 802.11, WSN IEEE 802.15.4 and WBAN IEEE 802.15.6). The studied routing strategies are classified as ad hoc proactive and reactive protocols, and geographic-based and gradient-based protocols. For realism, we conducted our simulations on a mall in Doha city in the State of Qatar, and generated a mobility trace modeling rescue team and crowd movements during the disaster. Extensive simulations showed that WiFi IEEE 802.11 is the best wireless technology to consider in an urban emergency with the studied protocols, while the gradient-based routing protocol performed much better than the others, especially with WBAN IEEE 802.15.6.
Keywords: Tactical Ad-hoc Networks; Public Safety and Emergency; Routing Protocols; IEEE 802.11; IEEE 802.15.4; IEEE 802.15.6; Performance Evaluation
I. Introduction
Public safety is a concern of governments worldwide. It is a continuous set of studies, operations and actions intended to predict, plan and perform a successful response in case of disaster. With the rise in the number and variety of disasters, not only economies and infrastructures are affected, but also significant numbers of human lives. With regard to the emergency response to these disasters, the role of existing Public Safety Network (PSN) infrastructures (e.g., TETRA, LTE) is vital. However, during and after a disaster, existing PSN infrastructures can be flawed, oversaturated or completely destroyed. There is therefore a growing demand for a ubiquitous emergency response system that can be easily and rapidly deployed at an unpredictable location and time. Wearable Body Area Networks (WBANs) are a relevant candidate and can play a key role in monitoring the physiological status of deployed workforces and affected individuals. Composed of small, low-power devices connected to a coordinator, the WBAN communication architecture relies on three levels: on-body (intra-body), body-to-body (inter-body) and off-body communication networks.
In disaster scenarios, network connectivity and data delivery are challenging problems due to dynamic mobility and harsh environments [1]. It is envisioned that, when network infrastructure is unavailable or out of range, the WBAN coordinators along with the WBAN sensors can exploit cooperative, multi-hop body-to-body communications to extend end-to-end network connectivity. The Opportunistic and Mobility Aware Routing (OMAR) scheme is an on-body routing protocol proposed in one of our earlier works [2].
A realistic mobility model is another major challenge for such simulations. To the best of the authors' knowledge, no comparable study in a disaster context has been conducted using a realistic disaster mobility pattern.
In this paper, we investigate several classes of multi-hop body-to-body routing protocols using realistic mobility modeling software provided by Bonn University in Germany [3]. The mobility pattern is fed to the WSNET simulator as a mobility trace of the nodes moving during and after the disaster; individuals are modeled as mobile nodes in the scenario. In each iteration of the conducted simulations, one communication technology configuration is selected (WiFi IEEE 802.11, WSN IEEE 802.15.4 or WBAN IEEE 802.15.6), and the simulations are then run with each routing protocol (proactive, reactive, gradient-based and geographic-based). This strategy provides insight not only into the behavior of the routing protocols in the disaster context, but also into the suitability of the communication technologies. As the proactive, reactive, gradient-based and geographic-based routing protocols, we selected the Optimized Link State Routing protocol version 2 (OLSRv2) [4], Ad-hoc On-Demand Distance Vector (AODVv2) [5], Directed Diffusion (DD) [6] and Greedy Perimeter Stateless Routing (GPSR) [7], respectively.
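To make the gradient-based class concrete, the toy sketch below (our simplification, not the simulated DD implementation) floods an interest from the sink to set up hop-count gradients and then forwards data down the gradient:

```python
from collections import deque

def build_gradients(adj, sink):
    """Interest flooding from the sink = BFS that stamps a hop count per node."""
    hops, queue = {sink: 0}, deque([sink])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                queue.append(v)
    return hops

def route(adj, hops, src):
    """Forward data along strictly decreasing hop counts until the sink."""
    path = [src]
    while hops[path[-1]] > 0:
        path.append(min(adj[path[-1]], key=hops.get))
    return path

adj = {"cmd": ["r1"], "r1": ["cmd", "r2", "r3"],
       "r2": ["r1", "victim"], "r3": ["r1"], "victim": ["r2"]}
hops = build_gradients(adj, sink="cmd")
print(route(adj, hops, src="victim"))    # ['victim', 'r2', 'r1', 'cmd']
```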
The remainder of this abstract is organized as follows. In section II, we present briefly the disaster scenario considered. In section III, we explain the results of the simulations. Finally, in Section IV, we conclude and discuss perspectives.
II. Landmark disaster scenario
We are investigating a disaster scenario (an outbreak of fire) in the “Landmark” shopping mall. The mobility model is generated with the BonnMotion tool. We assume that the incident involves fires at two different locations. Rescuers are immediately called to intervene: firefighters are divided into 3 groups of vehicles with 26 firefighters per group, and the medical emergency teams that could reach the mall just after the incident consist of 6 ambulances with 5 medical staff each (30 rescuers in total).
Civilians may also help the rescuers, and they too are considered in the mobility trace generation. Injured individuals are transported from the incident areas to the patients-waiting-for-treatment areas to receive first aid. They are then moved to the patient-clearing areas, where they are either put under observation or evacuated to hospitals by ambulance or helicopter. A tactical command center conducting the operations is also represented. WSNET is an event-driven simulator for wireless networks, able to design, configure and simulate a whole communication stack from the physical to the application layer. We exploit these features to vary the payload together with the selected MAC and routing layers in each iteration. These combinations provide a thorough review of the possible communication architectures to consider in disaster operations. The following section describes the outcome of these extensive simulations.
III. Performance evaluation
The main difference between a disaster and any other scenario is the emergency aspect: all data flowing in the network is considered highly important, and the probability of a packet failing to reach its destination must approach zero. For this reason, our evaluation focuses on the Packet Reception Rate (PRR). Likewise, a delayed packet is as bad as an unreceived one, which is why we treat delay as a decisive factor. Energy consumption is also observed.
The following table summarizes the obtained results.
In terms of average PRR, WiFi IEEE 802.11 is convincingly better than the two other technologies combined with all the routing protocols. GPSR achieves a considerable PRR with WBAN IEEE 802.15.6, but the location information of the nodes is assumed to be known. Regarding delay, DD is particularly efficient with WiFi and WBAN.
To conclude, DD is an efficient routing protocol for indoor operations, while GPSR is most relevant in outdoor operations, where locations can be obtained from GPS.
IV. Conclusion
Disasters are growing remarkably worldwide, and existing communication infrastructures cannot be counted on as part of the communication rescue system. Consequently, to monitor deployed individuals (rescue teams, injured individuals, etc.), data should be forwarded through these individuals' WBANs to reach a distant command center. To evaluate the performance of diverse multi-hop routing protocols, we conducted extensive simulations in WSNET using a realistic mobility model. The simulations showed that all evaluated routing protocols (AODVv2, OLSRv2, DD and GPSR) achieve their best PRR with the WiFi technology, while DD was found to be the most efficient with the WBAN technology. GPSR can also be considered when location information is available.
Acknowledgment
The work was supported by NPRP grant #[6-1508-2-616] from the Qatar National Research Fund which is a member of Qatar Foundation. The statements made herein are solely the responsibility of the authors.
References
[1] M. M. Alam, D. B. Arbia, and E. B. Hamida, “Device-to-Device Communication in Wearable Wireless Networks,” 10th CROWNCOM Conf., Apr-2015.
[2] E. B. Hamida, M. M. Alam, M. Maman, and B. Denis, “Short-Term Link Quality Estimation for Opportunistic and Mobility Aware Routing in Wearable Body Sensors Networks,” in Proc. IEEE 10th Int. Conf. on Wireless and Mobile Computing, Networking and Communications (WiMob), pp. 519–526, Oct. 2014.
[3] N. Aschenbruck, “BonnMotion: A Mobility Scenario Generation and Analysis Tool.” University of Osnabrück, Jul. 2013.
[4] T. Clausen, C. Dearlove, P. Jacquet, and U. Herberg, “RFC7181: The Optimized Link State Routing Protocol Version 2” Apr-2014.
[5] C. Perkins, S. Ratliff, and J. Dowdell, “Dynamic MANET On-demand (AODVv2) Routing draft-ietf-manet-dymo-26.” Feb-2013.
[6] C. Intanagonwiwat, R. Govindan, and D. Estrin, “Directed diffusion: a scalable and robust communication paradigm for sensor networks,” pp. 56–67, 2000.
[7] B. Karp and H. T. Kung, “GPSR: Greedy Perimeter Stateless Routing for Wireless Networks,” Annu. ACMIEEE Int. Conf. Mob. Comput. Netw. MobiCom 2000, no. 6, 2000.
-
-
-
Robotic Assistants in Operating Rooms in Qatar, Development phase
Authors: Carlos A. Velasquez, Amer Chaikhouni and Juan P. Wachs
Objectives
To date, no automated solution can anticipate or detect a request from a surgeon during surgical procedures without requiring the surgeon to alter his/her behavior. We are addressing this gap by developing a system that can pass the correct surgical instruments as required by the main surgeon. The study uses a manipulator robot that automatically detects and analyzes explicit and implicit requests during surgery, emulating a human nurse handling surgical equipment. This work constitutes an important step in a broader research project that addresses further challenges related to operative efficiency and safety.
At the 2016 QF Annual Research Forum Conference, we would like to present our preliminary results from the project: first, a description of the methodology used to capture surgical team interactions during several cardiothoracic procedures observed at the HMC Heart Hospital, followed by an analysis of the acquired data; second, experimental results of actual human-robot interaction tests emulating the behavior of a human nurse.
Methods
In order to study the interactions at the operating room during surgical procedures, a model of analysis was structured and executed for several cardiothoracic operations captured with MS Kinect V2 sensors. The data obtained was meticulously studied and relevant observations stored in a database to facilitate the analysis and comparison of events representing the different interactions among the surgical team.
Surgical Annotations
Two or three consecutive events identify a manipulation sequence in time. For the purpose of developing an annotation structure, each record in the database can be divided into: information on the time of occurrence of the event, counted from the beginning of the procedure; information describing how the manipulation event occurs; information on the position of the instrument in the space around the patient; and an optional final component with brief additional information that may help relate the event to the flow of the surgical operation.
Figure 1: Operating room at HMC Heart Hospital. (a) Kinetic Sensor location (b) Surgical team and instrument locations for a cardio thoracic procedure as viewed from the sensor
1.1. Information containing the time of occurrence of the sequence
The timing of a sequence is described by time stamps corresponding to its initial and final events. Some special sequences include an additional intermediate event that we call 'Ongoing'. All events are also counted as they occur, and the status of this counter is included as a field in the time-of-occurrence group.
1.2. Information describing how the manipulation sequence occurs
Careful observation of the routines performed in the operating room allowed us to identify sequences of events that can be classified into three general categories describing how the manipulation event occurs:
Commands, which correspond to requests for instruments or operations addressed to the supporting staff. These requests can be classified as verbal, non-verbal, or a combination of both. Commands are not made exclusively by surgeons; sometimes the nurse handling instruments requests actions from the circulating staff too.
Predictions, made by the supporting staff when selecting and preparing instruments in advance in order to hand them to the surgeon (Fig. 3). These predictions are classified as right or wrong depending on the surgeon's decision to accept or reject the instrument when it is offered. Sometimes an instrument whose use was predicted incorrectly at a given time is required by the surgeon in a sequence shortly afterwards; we classified this kind of event as a partially wrong prediction (Fig. 4).
Actions, which correspond to independent or coupled sequences necessary for the flow of the surgical procedure. For instance, as illustrated by the start and end events of Fig. 2, the surgeon himself picks up an instrument from the Mayo tray. The detailed observation of all relevant actions is essential to understand how commands are delivered, which intermediate events are triggered in response, and how instruments move in space and time between their original and final locations.
Table 1 summarizes the most common sequences of events found in the surgical procedures analyzed. The table also shows how the roles of the surgical team are distributed in relation to the events considered.
1.3. Information related to the instrument and its position in the space around the patient
The instrument is identified simply by its name. Several instances of the same instrument are used during surgery, but for annotation purposes we refer to all of them as if only one were available. When some physical characteristic differentiates an instrument from others of the same kind, such as size, a different instrument name is selectable; in Table 2, for example, a 'Retractor' is differentiated from a 'Big Retractor'. An instrument can be located in any of the areas listed under the label 'Area' in Table 2, as can be verified from Fig. 1. If one of the members of the surgical team holds the instrument in one or both hands, the exact situation can be specified by selecting one of the options under the label 'Hands' in Table 2.
1.4. Additional information
Any remarkable information related to the event can be included in this free-text field. For example, at some point the nurse may offer two instruments simultaneously to the surgeon; this is a rare situation, since the exchange is usually performed instrument by instrument.
Figure 2: Example of an action: The surgeon picks up directly an instrument
Figure 3: The nurse anticipates the use of one instrument
Figure 4: A Partially wrong prediction: One of two instruments is accepted
Table 1 Description of events and relations to the roles of surgical staff
Table 2 Information about location of the instrument
2. Annotation Software Tool
Based on libraries and information provided by Microsoft, we wrote code that allows MS Kinect Studio to be used to annotate the surgical procedures. The use of Kinect Studio has several advantages compared to the other tools we evaluated, such as high precision in identifying the length and timing of a sequence, and efficiency in the analysis of simultaneous streams of information. Figure 5 shows the screen presented by Kinect Studio when annotations are being made for the color stream of the surgical recording used as an example in the same illustration. The color stream is rendered at 30 fps, which means that on average a frame is available to annotate roughly every 0.033 s if necessary. The blue markers on the Timeline are located at events of interest. On the left side of the screen, a set of fields corresponding to the information of interest is displayed to be filled in for each event.
Figure 5: Annotations in Kinect Studio are introduced as metadata fields for the Timeline Marker
The collection of entries describing the interaction is written to an output text file that can be processed with conventional database software tools. The annotations obtained within MS Kinect Studio are exported as a text table whose record structure is presented in Fig. 6; each record contains the events and staff roles listed in Table 1 as well as the instrument information fields presented in Table 2.
Figure 6: Structure of the annotation record obtained from the metadata associated to the timeline markers in Kinect Studio
Figure 7: Annotations Database as processed within Excel for a surgery of 30 minutes
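To make the record structure concrete, a minimal parsing sketch is shown below; the field names and the tab-separated layout are our own assumptions for illustration, while the actual export follows the model of Fig. 6.

```python
import csv
from dataclasses import dataclass

@dataclass
class AnnotationRecord:
    """One timeline-marker annotation exported from Kinect Studio (hypothetical layout)."""
    timestamp: str      # time of the event, counted from the start of the procedure
    event_count: int    # running counter of events as they occur
    category: str       # Command / Prediction / Action (Table 1)
    detail: str         # e.g., verbal, non-verbal, right, wrong, partially wrong
    instrument: str     # instrument name, e.g., "Retractor" or "Big Retractor"
    area: str           # location around the patient (label 'Area', Table 2)
    hands: str          # which hand(s) hold the instrument (label 'Hands', Table 2)
    notes: str          # optional free-text remarks

def load_annotations(path):
    """Read the exported text table into a list of AnnotationRecord objects."""
    records = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):   # assuming a tab-separated export
            records.append(AnnotationRecord(row[0], int(row[1]), *row[2:8]))
    return records
```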
The annotations database obtained within Kinect Studio for the surgical procedure1 used as an example in this report was exported to MS Excel for analysis. A partial view of this database is presented in Fig. 7, showing some of the first sequences stored. The colors in the third column differentiate events that belong to the same sequence; they are chosen arbitrarily, and once the final event of a sequence has been identified, the same color becomes available to signal a new sequence. In total, the database for this example contains 259 records covering a period of 30 minutes. Queries performed using database functionalities generate the results for predictions and commands illustrated in Fig. 8 and Fig. 9.
Figure 8: Predictions: (a) discrimination as right, wrong or partially wrong; (b) instruments received by the surgeon; (c) nurse hand holding instruments; (d) instruments rejected by the surgeon; (e) time elapsed while the prediction is performed.
Figure 9: Commands: (a) discrimination as verbal, nonverbal or a combination of both; (b) instruments requested; (c) time elapsed in verbal commands; (d) time elapsed in nonverbal commands; (e) time elapsed while the instrument is requested.
Experimental Setup
As a preliminary step towards operating a manipulator robot as a robotic nurse, surgical personnel at the HMC Heart Hospital were asked to tie a mock knot on a synthetic model, as illustrated in Fig. 10. While this task is performed, a Kinect sensor captures body position, hand gestures and voice commands. This information is processed by a workstation running Windows-compatible software that controls the robot, which reacts by passing the requested surgical instrument to the subject so that the task can be completed.
Figure 10: (a) Mock knot used as preliminary test of interaction (b) Robotic Set up for experiments at the HMC Heart Hospital
The robot used is a Barrett robotic manipulator with seven degrees of freedom, as shown in Fig. 10. This FDA-approved robot is one of the most advanced robotic systems considered safe to operate around human subjects, since it has force-sensing capabilities that are used to avoid potential impacts.
Summary
As part of the development of an NPRP project studying the feasibility of robotic nurses in the operating room that can recognize verbal and nonverbal commands to deliver instruments from the tray to the hand of the surgeon, we have studied the interaction of surgical teams performing cardiothoracic procedures at the Heart Hospital in Doha. Using state-of-the-art sensor devices, we captured a wealth of information that has been carefully analyzed and annotated into databases. At the 2016 QF Annual Research Forum Conference, we would like to present our current findings as well as the results of human-robot interaction tests with a manipulator robot acting as a robotic nurse in the execution of a task involving gesture/verbal recognition, recognition of the instrument, and safe delivery to the surgeon.
1 Wedge Lung Resection. In this procedure the surgeon removes a small wedge-shaped piece of lung that contains cancer and a margin of healthy tissue around the cancer.
-
-
-
Design and Performance Analysis of VLC-based Transceiver
Authors: Amine Bermak and Muhammad Asim Atta
Background
As the number of handheld devices increases, wireless data traffic is expanding exponentially. With the ever-increasing demand for higher data rates, it will be very challenging for system designers to meet requirements using the limited Radio Frequency (RF) communication spectrum. One possible remedy is to use the freely available visible light spectrum [1].
Introduction
This paper proposes an indoor communication system utilizing Visible Light Communication (VLC). VLC technology uses the visible light spectrum (380–750 nm) not only for illumination but also, as an added advantage, for data communication [2]. Visible Light Communication exploits the high-frequency switching capabilities of Light Emitting Diodes (LEDs) to transmit data. A receiver, generally containing a photodiode, receives the signal from the optical source and can easily decode the transmitted information. In practical systems, a CMOS imager containing an array of photodiodes is preferred over a single photodiode as the receiver: such a receiver enables multi-target detection and multi-channel communication, resulting in a more robust transceiver architecture [3].
Method
This work demonstrates a real-time transceiver implementation for Visible Light Communication on an FPGA. A Pseudo-Noise (PN) sequence is generated to act as input data for the transmitter. A Direct Digital Synthesizer (DDS) is implemented to generate the carrier signal for modulation [4]. The transmitter uses On-Off Keying (OOK) to modulate the incoming data, chosen for its simplicity [5]. The modulated signal is then converted into analog form using a Digital-to-Analog Converter (DAC). An analog driver circuit connected to the digital transmitter is capable of driving an array of Light Emitting Diodes (LEDs) for data transmission. The block-level architecture of the VLC transmitter is shown in Fig. 1.
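For intuition, a minimal baseband simulation of this transmit chain (PN sequence, carrier, OOK gating) is sketched below. The 1 Mbps data rate and 5 MHz carrier are the values reported in the Results section; the sampling rate, the LFSR polynomial and everything else are our own simplifications of the FPGA design.

```python
import numpy as np

FS = 50e6          # simulation sampling rate (assumption)
BIT_RATE = 1e6     # 1 Mbps, as in the basic implementation
F_CARRIER = 5e6    # 5 MHz carrier, as in the basic implementation

def pn_sequence(n_bits, seed=0b1010110):
    """Pseudo-noise bit stream from a 7-bit LFSR (x^7 + x^6 + 1)."""
    state, bits = seed, []
    for _ in range(n_bits):
        bits.append(state & 1)
        feedback = ((state >> 6) ^ (state >> 5)) & 1
        state = (state >> 1) | (feedback << 6)
    return np.array(bits)

def ook_modulate(bits):
    """On-Off Keying: the carrier is transmitted for '1' bits and gated off for '0' bits."""
    samples_per_bit = int(FS / BIT_RATE)
    t = np.arange(len(bits) * samples_per_bit) / FS
    carrier = np.sin(2 * np.pi * F_CARRIER * t)   # stands in for the DDS output
    gate = np.repeat(bits, samples_per_bit)       # one gate value per output sample
    return gate * carrier                         # waveform sent to the DAC / LED driver

waveform = ook_modulate(pn_sequence(100))
```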
The receiver architecture uses analog circuitry, including photodiodes for optical detection and operational amplifiers for amplifying the received signal. Analog-to-Digital (ADC) conversion is performed before the data is passed back to the FPGA for demodulation and data reconstruction. Figure 2 shows the architecture of the VLC receiver.
Results and Conclusion
The system is implemented and tested using a Xilinx Spartan-3A series FPGA [6]. The basic transceiver implementation uses a data rate of 1 Mbps with a carrier frequency of 5 MHz. However, in VLC the data rate and carrier frequency directly affect the optical characteristics of the LEDs, including color and intensity. Therefore, different data rates and modulation frequencies are evaluated to obtain optimum data transmission with minimal effect on the optical characteristics of the LEDs. System complexity in terms of hardware resources, and performance in terms of Bit Error Rate (BER) under varying conditions, are also compared.
The results demonstrate that it is feasible to establish a low-data-rate communication link for indoor applications at ranges of up to 10 m using commercially available LEDs. Integrating a CMOS imager at the receiver end will enable a VLC-based Multiple-Input Multiple-Output (MIMO) communication link that can serve multiple channels, up to one channel per pixel [3]. Higher data rates are also achievable by using higher-rate modulation techniques such as OFDM, at the expense of computational complexity and hardware resource utilization [7].
One possible implication of this work is the implementation of a VLC-based indoor positioning and navigation system. This could benefit large buildings that deal with the public, including but not limited to hospitals, customer support centers, public service offices, shopping malls and libraries. Such a system would largely reuse the existing indoor lighting infrastructure, with the added advantage of data communication.
The study also proposes extending this work to outdoor VLC applications. However, more robust algorithms are required for outdoor communication due to optical noise and the interference caused by weather and atmospheric conditions. The robustness of the existing algorithm can be increased by integrating Direct Sequence Spread Spectrum (DSSS) together with OOK for modulation. Further research is required to evaluate the performance, complexity and robustness of such a system under realistic conditions.
References
[1] Cisco Visual Networking Index, “Global Mobile Data Traffic Forecast Update, 2012–2017,” Cisco White Paper, Feb. 2013.
[2] D. Terra, N. Kumar, N. Lourenço, and L. N. Alves, “Design, development and performance analysis of DSSS-based transceiver for VLC,” in Proc. IEEE EUROCON (International Conference on Computer as a Tool), 2011.
[3] “Image Sensor Communication,” VLCC Consortium.
[4] Xilinx DDS Compiler IP Core, http://www.xilinx.com/products/intellectual property/dds_compiler.html#documentation
[5] N. Lourenço, D. Terra, N. Kumar, L. N. Alves, and R. L. Aguiar, “Visible Light Communication System for Outdoor Applications,” in Proc. 8th IEEE/IET International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP).
[6] Xilinx Spartan-3A Starter Kit, http://www.xilinx.com/products/boards-and-kits/hw-spar3a-sk-uni-g.html
[7] L. Grobe, A. Paraskevopoulos, J. Hilt, D. Schulz, F. Lassak, F. Hartlieb, C. Kottke, V. Jungnickel, and K.-D. Langer, “High Speed Visible Light Communication Systems,” IEEE Communications Magazine, Dec. 2013.
-
-
-
A Robust Unified Framework of Vehicle Detection and Tracking for Driving Assistance System with High Efficiency
Authors: Amine Bermak and Bo Zhang
Background
Research by the Qatar Road Safety Studies Center (QRSSC) found that the total number of traffic accidents in Qatar was 290,829 in 2013, with a huge economic cost amounting to 2.7 percent of the country's gross domestic product (GDP). There is a growing research effort to improve road safety and to develop driving assistance systems, or even self-driving systems like the Google project, which are widely expected to revolutionize the automotive industry. Vision sensors will play a prominent role in such applications because they provide intuitive and rich information about the road condition. However, vision-based vehicle detection and tracking is a challenging task because of the large variability in vehicle appearance, interference from strong light, occasionally fierce weather conditions, and complex interactions among drivers.
Objective
While previous work usually regards vehicle detection and tracking as separate tasks [1, 2], we propose a unified framework for both. For the detection phase, recent work has mainly focused on building detection systems based on robust feature sets such as histograms of oriented gradients (HOG) [3] and Haar-like features [4], rather than simple features such as symmetry or edges. However, these robust features involve heavy computational requirements. In this work, we propose an algorithmic framework designed to achieve both high efficiency and robustness while keeping the computational requirements at an acceptable level.
Method
In the detection phase, in order to reduce processing latency, we propose to use a hardware-friendly corner detection method, Features from Accelerated Segment Test (FAST) [5], which determines interest corners by comparing each pixel with the pixels on a circle around it: if enough contiguous pixels on the circle (9, in the common FAST-9 variant) are all brighter or all darker than the center pixel, it is marked as a corner point. Fig. 1 shows the result of the FAST corner detector on a real road image. We use the recent Learned Arrangements of Three Patch Codes (LATCH) [6] as the corner point descriptor. LATCH falls into the binary descriptor category but still maintains performance comparable to histogram-based descriptors (like HOG). The descriptors created by LATCH are binary strings computed by comparing image patch-triplets rather than individual pixels; as a result, they are less sensitive to noise and to minor changes in local appearance. In order to detect vehicles, corners in successive images are matched to those in previous images, and the optical flow at each corner point is derived from the movement of the corner points. Since approaching vehicles driving in the opposite direction produce a flow that diverges from the flow due to ego-motion, vehicles can be detected from the flow field. Fig. 2 illustrates the flow estimated from corner point matching. The sparse optical flow proposed here is quite robust thanks to the characteristics of LATCH, and it requires far fewer computational resources than traditional optical flow methods, which must solve a time-consuming optimization problem.
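A rough sketch of this detection front end is given below, using OpenCV's FAST and LATCH implementations as software stand-ins for the hardware design (LATCH requires the opencv-contrib package); the threshold value is an arbitrary choice.

```python
import cv2
import numpy as np

# FAST corner detector: a pixel is a corner when enough contiguous circle
# pixels are all brighter or all darker than it by more than the threshold.
fast = cv2.FastFeatureDetector_create(threshold=25)
latch = cv2.xfeatures2d.LATCH_create()              # binary patch-triplet descriptor
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def sparse_flow(prev_gray, curr_gray):
    """Estimate sparse optical flow by matching LATCH descriptors at FAST corners."""
    kp1, des1 = latch.compute(prev_gray, fast.detect(prev_gray))
    kp2, des2 = latch.compute(curr_gray, fast.detect(curr_gray))
    flows = []
    for m in matcher.match(des1, des2):
        p1 = np.array(kp1[m.queryIdx].pt)
        p2 = np.array(kp2[m.trainIdx].pt)
        flows.append((p1, p2 - p1))                 # (corner position, flow vector)
    return flows
```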
Once vehicles are detected, the tracking phase is achieved by matching the corner points. Using a Kalman filter for prediction, matching is fast because a probable matching corner point is only searched for near the predicted location. Using corner points to compute sparse optical flow enables vehicle detection and tracking to be carried out simultaneously within this unified framework (Fig. 3). In addition, the framework allows us to detect cars that newly enter the scene during tracking. Since most image sensors today are based on a rolling-shutter integration approach, the image information can be transmitted to the FPGA-based hardware serially, and hence the FAST detector and LATCH descriptor can work in a pipelined manner for efficient computation.
Conclusion
In this work, we propose a framework for detecting and tracking vehicles for driving assistance applications. Vehicles are detected from the sparse optical flow estimated by corner point matching, and tracking is likewise performed by corner point matching with the assistance of a Kalman filter. The proposed framework is robust and efficient, and its much lower computational requirements make it a very viable solution for embedded vehicle detection and tracking systems.
References
[1] S. Sivaraman and M. M. Trivedi, “Looking at vehicles on the road: A survey of vision-based vehicle detection, tracking, and behavior analysis,” IEEE Trans. Intell. Transp. Syst., vol. 14, no. 4, pp. 1773–1795, 2013.
[2] Z. Sun, G. Bebis, and R. Miller, “On-Road Vehicle Detection: A Review,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 5, pp. 694–711, 2006.
[3] Z. Sun, G. Bebis, and R. Miller, “Monocular precrash vehicle detection: Features and classifiers,” IEEE Trans. Image Process., vol. 15, no. 7, pp. 2019–2034, 2006.
[4] W. C. Chang and C. W. Cho, “Online boosting for vehicle detection,” IEEE Trans. Syst. Man, Cybern. Part B Cybern., vol. 40, no. 3, pp. 892–902, 2010.
[5] E. Rosten and T. Drummond, “Fusing points and lines for high performance tracking,” in Proc. Tenth IEEE Int. Conf. on Computer Vision (ICCV), vol. 2, pp. 1508–1515, 2005.
[6] G. Levi and T. Hassner, “LATCH: Learned Arrangements of Three Patch Codes,” arXiv, 2015.
-
-
-
On Arabic Multi-Genre Corpus Diacritization
Authors: Houda Bouamor, Wajdi Zaghouani, Mona Diab, Ossama Obeid, Kemal Oflazer, Mahmoud Ghoneim and Abdelati Hawwari
One of the characteristics of writing in Modern Standard Arabic (MSA) is that the commonly used orthography is mostly consonantal and does not provide full vocalization of the text. It sometimes includes optional diacritical marks (henceforth, diacritics or vowels).
Arabic script consists of two classes of symbols: letters and diacritics. Letters comprise long vowels such as A, y, w as well as consonants. Diacritics, on the other hand, comprise short vowels, gemination markers, nunation markers, as well as other markers (such as hamza, the glottal stop which appears in conjunction with a small number of letters, dots on letters, elongation and emphatic markers) which, if present, together render a more or less exact reading of a word. In this study, we mostly address three types of diacritical marks: short vowels, nunation, and shadda (gemination).
Diacritics are extremely useful for text readability and understanding. Their absence in Arabic text adds another layer of lexical and morphological ambiguity. Naturally occurring Arabic text has some percentage of these diacritics present depending on genre and domain. For instance, religious text such as the Quran is fully diacritized to minimize chances of reciting it incorrectly. So are children's educational texts. Classical poetry tends to be diacritized as well. However, news text and other genre are sparsely diacritized (e.g., around 1.5% of tokens in the United Nations Arabic corpus bear at least one diacritic (Diab et al., 2007)).
In general, building models to assign diacritics to each letter in a word requires a large amount of annotated training corpora covering different topics and domains to overcome the sparseness problem. The currently available diacritized MSA corpora are generally limited to the newswire genres (those distributed by the LDC) or religion-related texts such as the Quran or the Tashkeela corpus. In this paper we present a pilot study in which we annotate a sample of non-diacritized text extracted from five different text genres. We explore different annotation strategies in which we present the data to the annotator in three modes: basic (only forms with no diacritics), intermediate (basic forms plus POS tags), and advanced (a list of forms that are automatically diacritized). We show the impact of the annotation strategy on the annotation quality.
It has been noted in the literature that complete diacritization is not necessary for readability (Hermena et al., 2015) or for NLP applications; in fact, Diab et al. (2007) show that full diacritization has a detrimental effect on SMT. Hence, we are interested in discovering the optimal level of diacritization. Accordingly, we explore different levels of diacritization. In this work, we limit our study to two diacritization schemes: FULL and MIN. For FULL, all diacritics are explicitly specified for every word. For MIN, we explore the minimum and optimal number of diacritics that needs to be added in order to disambiguate a given word in context and make a sentence easily readable and unambiguous for any NLP application.
We conducted several experiments on a set of sentences extracted from five corpora covering different genres. We selected three corpora from the Arabic Treebanks currently available from the Linguistic Data Consortium (LDC). These corpora were chosen because they are fully diacritized and have undergone significant quality control, which allows us to evaluate the annotation accuracy as well as our annotators' understanding of the task. We selected a total of 16,770 words from these corpora for annotation. Three native Arabic annotators with a good linguistic background annotated the corpus samples. Diab et al. (2007) define six different diacritization schemes inspired by the observation of the relevant naturally occurring diacritics in different texts. We adopt the FULL diacritization scheme, in which all the diacritics of a word should be specified; annotators were asked to fully diacritize each word.
The text genres were annotated following the different strategies:
- Basic: In this mode, we ask for annotation of words where all diacritics are absent, including the naturally occurring ones. The words are presented in a raw tokenized format to the annotators in context.
- Intermediate: In this mode, we provide the annotator with words along with their POS information. The intuition behind adding POS is to help the annotator disambiguate a word by narrowing down on the diacritization possibilities.
- Advanced: In this mode, the annotation task is formulated as a selection task instead of an editing task. Annotators are provided with a list of automatically diacritized candidates and are asked to choose the correct one, if it appears in the list. Otherwise, if they are not satisfied with the given candidates, they can manually edit the word and add the correct diacritics. This technique is designed to reduce annotation time and, especially, annotator workload. For each word, we generate a list of vowelized candidates using MADAMIRA (Pasha et al., 2014). MADAMIRA achieves a lemmatization accuracy of 99.2% and a diacritization accuracy of 86.3%. We present the annotator with the top three candidates suggested by MADAMIRA, when possible; otherwise, only the available candidates are provided.
We also provided annotators with detailed guidelines, describing our diacritization scheme and specifying how to add diacritics for each annotation strategy. We described the annotation procedure and specified how to deal with borderline cases. We also provided in the guidelines many annotated examples to illustrate the various rules and exceptions.
In order to determine the optimal annotation setup in terms of speed and efficiency, we compared the results obtained with the three annotation strategies. These annotations were all conducted for the FULL scheme. We first calculated the number of words annotated per hour, for each annotator and in each mode. As expected, in the Advanced mode our three annotators annotated an average of 618.93 words per hour, double the rate of the Basic mode (only 302.14 words). Adding POS tags to the basic forms, as in the Intermediate mode, does not accelerate the process much: only around 90 more words are diacritized per hour compared to the Basic mode.
Then, we evaluated the Inter-Annotator Agreement (IAA) to quantify the extent to which independent annotators agree on the diacritics chosen for each word. For every text genre, two annotators were asked to annotate independently a sample of 100 words.
We measured the IAA between two annotators by averaging the WER (Word Error Rate) over all pairs of words. The higher the WER between two annotations, the lower their agreement. The results obtained show clearly that the Advanced mode is the best strategy to adopt for this diacritization task: it is the least confusing method across all text genres (with WER between 1.56 and 5.58).
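A minimal sketch of this agreement computation is shown below; it is our own formulation, in which the per-sample WER reduces to the fraction of word positions where the two fully diacritized forms differ, since both annotators diacritize the same tokens.

```python
def pairwise_wer(annotation_a, annotation_b):
    """WER between two diacritized versions of the same token sequence."""
    assert len(annotation_a) == len(annotation_b)
    mismatches = sum(a != b for a, b in zip(annotation_a, annotation_b))
    return mismatches / len(annotation_a)

def inter_annotator_agreement(samples):
    """Average WER over (annotator1_words, annotator2_words) samples.
    A lower average WER means higher inter-annotator agreement."""
    return sum(pairwise_wer(a, b) for a, b in samples) / len(samples)
```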
We also conducted a preliminary study for a minimum diacritization scheme. This is a diacritization scheme that encodes only the most relevant differentiating diacritics, in order to reduce confusability among words that look the same (homographs) when undiacritized but have different readings. Our hypothesis for MIN is that there is an optimal level of diacritization that renders a text unambiguous for processing and enhances its readability. We showed the difficulty of defining such a scheme and how subjective this task can be.
Acknowledgement
This publication was made possible by grant NPRP-6-1020-1-199 from the Qatar National Research Fund (a member of the Qatar Foundation).
-
-
-
QUTor: QUIC-based Transport Architecture for Anonymous Communication Overlay Networks
Authors: Raik Aissaoui, Ochirkhand Erdene-Ochir, Mashael Al-Sabah and Aiman Erbad
In this new century, the growth of Information and Communication Technology (ICT) has had a significant influence on our lives. The wide spread of the internet has created an information society in which the creation, distribution, use, integration and manipulation of information is a significant economic, political and cultural activity. However, it has also brought its own set of challenges. Internet users have become increasingly vulnerable to online threats like botnets, Denial of Service (DoS) attacks and phishing spam mail. Stolen user information can be exploited by many third-party entities: some Internet Service Providers (ISPs) sell this data to advertising companies, which analyse it and build marketing strategies to influence customer choices, breaking their privacy, while oppressive governments exploit revealed private user data to harass members of opposition parties, civil society activists and journalists. Anonymity networks have been introduced to allow people to conceal their identity online, by providing unlinkability between a user's IP address, their digital fingerprint, and their online activities. Tor is the most widely used anonymity network today, serving millions of users on a daily basis through a growing number of volunteer-run routers [1]. Clients send their data to their destinations through a number of volunteer-operated proxies, known as Onion Routers (ORs). A user who wants to use the network to protect his online privacy installs the Onion Proxy (OP), which bootstraps by contacting centralized servers, known as authoritative directories, to download the needed information about the ORs serving in the network. The OP then builds overlay paths, known as circuits, each consisting of three ORs (entry guard, middle and exit), where only the entry guard knows the user and only the exit knows the destination. Tor thus helps internet users hide their identities, but it introduces large and highly variable delays in response and download times during web surfing, which can be inconvenient for users. Traffic congestion adds further delay and variability to the performance of the network. Moreover, Tor uses an end-to-end flow control approach that does not react to congestion in the network.
To improve Tor performance, we propose to integrate QUIC into Tor. QUIC [2] (Quick UDP Internet Connections) is a new multiplexed and secure transport protocol atop UDP, developed by Google. QUIC is implemented over UDP to solve a number of transport-layer and application-layer problems experienced by modern web applications. It reduces connection establishment latency: QUIC handshakes frequently require zero round trips before sending payload. It improves congestion control and multiplexes streams without head-of-line blocking: since QUIC is designed for multiplexed streams, lost packets carrying data for an individual stream generally only impact that specific stream. In order to recover from lost packets without waiting for a retransmission, QUIC can complement a group of packets with a Forward Error Correction (FEC) packet. QUIC connections are identified by a 64-bit connection identifier (ID), so when a QUIC client changes Internet Protocol (IP) address, it can continue to use the old connection ID from the new IP address without interrupting any in-flight requests. QUIC provides multiplexing and flow control equivalent to HTTP/2, security equivalent to TLS, and connection semantics, reliability and congestion control equivalent to TCP. QUIC shows good performance compared with HTTP/1.1 [3]. We expect good results in improving the performance of Tor, since QUIC is one of the most promising solutions for decreasing latency [4]. A QUIC stream is a bi-directional flow of bytes across a logical channel within a QUIC connection; the latter is a conversation between two QUIC endpoints with a single encryption context that multiplexes the streams within it. QUIC's multiple-stream architecture improves Tor performance and solves the head-of-line blocking problem. As a first step, we implemented QUIC in OR nodes, so that they can be easily upgraded to the new architecture without modifying the end user's OP. Integrating QUIC will not degrade Tor's security, as it provides security equivalent to TLS (QUIC Crypto) and will soon use TLS 1.3.
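As an illustration of the FEC idea mentioned above, a toy XOR-parity scheme is sketched below; early QUIC versions used a simple XOR scheme along these lines, though this code is ours and not QUIC's actual implementation.

```python
from functools import reduce

def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length payloads."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_fec_packet(group):
    """XOR parity over a group of equal-length packets: if any single packet
    in the group is lost, it can be rebuilt without waiting for retransmission."""
    return reduce(xor_bytes, group)

def recover_lost(received, fec_packet):
    """Rebuild the one missing packet from the survivors plus the FEC packet."""
    return reduce(xor_bytes, received, fec_packet)

group = [b"pkt1....", b"pkt2....", b"pkt3...."]
fec = make_fec_packet(group)
assert recover_lost([group[0], group[2]], fec) == group[1]   # pkt2 recovered
```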
-
-
-
Cognitive Dashboard for Teachers Professional Development
Authors: Riadh Besbes and Seifeddine BesbesIntroduction
This research aims to enhance the culture of data in education which is in the middle of a major transformation by technology and Big Data Analytics. The core purpose of schools is providing an excellent education to every learner; data can be the leverage of that mission. Big data analytics is the process of examining large sets containing a variety of data types to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful business information. Valuable lessons can be learnt from other industries when considered in terms of their practicality for public education. Hence, Big Data Analytics, also known as Education Data Mining and Learning Analytics, develop capacity for quantitative research in response to the growing need for evidence-based analysis related to education policy and practice. However, education has been slow to follow the Data Analytics evolution due to difficulties surrounding what data to collect, how to collect those data and what they might mean. Our research works identify, quantify, and measure the qualitative teaching practices, the learning performances, and track the learners' academic progress. Teaching and learning databases are accumulated from quantitative “measures” done through indoors classroom visits within academic institutions, online web access learners' questionnaires answers, paper written statements' analysis of academic exams in mathematics, science, and literacy disciplines, and online elementary grades seizure from written traces of learners' performances within mathematics, science, and literacy exams. The project's data mining strategy will support and develop teachers' expertise, enhance and scaffold students' learning, improve and raise education system's performance. The supervisor expertise will mentor the researcher for information and educational knowledge extraction from collected data. As consequence, the researcher will acquire the wisdom of knowledge use to translate it into more effective training sessions on educational concrete policies.
State-of-the-art
Anne Jorro says: "to evaluate is necessarily to consider how we will support, advise and exchange, giving recognition to encourage the involvement of the actor and giving him the means to act". The PISA report states that many of the world's best-performing education systems have moved from bureaucratic "command and control" environments towards school systems in which the people at the frontline have much more control. Making teaching and learning data available leads to information and then to knowledge extraction; as advised by the PISA report, the effective use of extracted knowledge drives decision making towards wisdom. Linda Darling-Hammond and Charles E. Ducommun underscore the assumption that teachers are undoubtedly the fulcrum with the biggest impact, making any school initiative lead toward success or failure. Rivkin et al. assert that a teacher's classroom instructional practice is perhaps one of the most important yet least understood factors contributing to teacher effectiveness. Consequently, many classroom observation tools have been designed to demystify effective teaching practices. The Classroom Assessment Scoring System (CLASS) is a well-respected classroom climate observational system. The CLASS examines three domains of behaviour: first, emotional support (positive classroom climate, teacher sensitivity, and regard for student perspectives); second, classroom organization (effective behaviour management, productivity, and instructional learning formats); and third, instructional supports (concept development, quality of feedback, and language modelling). The Framework for Teaching method for evaluation by classroom observation was most recently released as its 2013 edition. It divides the complex activity of teaching into 22 components clustered into four domains of teaching responsibility. This latest edition was conceived to respond to the instructional implications of the American Common Core State Standards. Those standards envision, for literacy and mathematics initially, deep engagement by students with important concepts, skills, and perspectives. They emphasize active, rather than passive, learning by students. In all areas, they place a premium on deep conceptual understanding, thinking and reasoning, and the skill of argumentation. Heather Hill from Harvard University and Deborah Loewenberg Ball from the University of Michigan developed the "Mathematical Quality of Instruction (MQI)" instrument. Irving Hamer is an education consultant and a former deputy superintendent for academics, technology, and innovation for a school system.
Objectives
Our project's wider objective is to improve teaching and learning effectiveness within K-12 classes by exploiting data mining methods for educational knowledge extraction. The researcher carries out three daily visits to mathematics, science, and literacy courses. Using his interactive educational grid, an average of 250 numerical data points is stored as quantified teaching and learning practices for each classroom visit for every teacher. In parallel with these on-field activities, distance interactivity is conducted via a website. At the beginning, and only once, each learner from the classes scheduled for visits fills in an individual questionnaire for learning style identification. On another website form, the learner enters every elementary grade for each question of his mathematics, science and literacy exam answer sheets; the exam statements were previously analysed and stored on the website by the researcher. An average of 150 numerical data points is stored as quantified learning performances for every learner. Meetings at the partner university for data analytics and educational knowledge extraction were held, followed by meetings at the inspectorate headquarters for in-depth examination of the data. Then, in partner schools, training sessions were the theatres of constructive reflection and feedback on the major findings about teaching and learning effectiveness. These actions were reiterated over months. Each year, the performance of approximately 1000 students and the educational practices of 120 teachers will be specifically tracked. During the summer months, workshops, seminars, and an international conference will be organised for stakeholders from the educational field. Among the project's actions, three specific objectives shall thus be achieved. First, sufficient data on students' profiles and performances related to educational weaknesses and strengths will be provided. Second, teachers' practices inside classrooms at each partner school will be statistically recorded. Third, a complete data mining centre for educational research will be conceived and cognitively interpreted by researchers' teams, and the findings will then be presented for teachers' reflexive thought and discussion within meetings and training sessions.
Research methodology and approach
-
-
-
Dynamic Scheduled Access Medium Access Control for Emerging Wearable Applications
Authors: Muhammad Mahtab Alam, Dhafer Ben-Arbia and Elyes Ben-Hamida
Context and Motivation
Wearable technology is emerging as one of the key enablers of the Internet of Everything (IoE). The technology matures by the day, with more applications than ever before, and is consequently making a significant impact on the consumer electronics industry. With the continuous exponential rise of recent years, it is anticipated that by 2019 there will be more than 150 million wearable devices worldwide [1]. Whilst fitness and health care remain the dominant wearable applications, others, including fashion and entertainment, augmented reality, and rescue and emergency management, are emerging as well [2]. In this context, Wireless Body Area Networks (WBAN) is the well-known research discipline that fosters and contributes to the rapid growth of wearable technology. The IEEE 802.15.6 standard, targeted at WBAN, provides great flexibility and provisions at both the physical (PHY) and medium access control (MAC) layers [3].
Wearable devices are constrained by limited battery, miniaturization, and low processing and storage capabilities. While energy efficiency remains one of the most important challenges, a low-duty-cycle and dynamic MAC layer design is critical for a longer device lifetime. In this regard, the scheduled access mechanism is considered one of the most effective MAC approaches in WBAN: every sensor node can have a dedicated time slot to transfer its data to the BAN coordinator. However, for a given application, every node (i.e., connected sensor) has a different data transmission rate [4]; therefore, the scheduled access mechanism has to adapt the slot allocation accordingly to meet the design constraints (i.e., energy efficiency, packet delivery and delay requirements).
Problem Description
The scheduled access MAC with a 2.4 GHz operating frequency, the highest data rate (i.e., 971 Kbps) and the highest payload (i.e., 256 bytes) provides the maximum throughput in the IEEE 802.15.6 standard. However, both packet delivery ratio (PDR) and delay in this configuration are very poor at transmission powers of -10 dBm and below [5]. The presented study focuses on this particular PHY-MAC configuration to understand the maximum realistic achievable throughput while operating at the lowest transmission power for future IEEE 802.15.6-compliant transceivers. A further objective is to enhance the performance under realistic mobility patterns, i.e., space- and time-varying channel conditions.
Contribution
In this paper we address the reliability concern of the above-mentioned wearable applications when using the IEEE 802.15.6 (high-data-rate) PHY-MAC configuration. The objective is to enhance system performance by exploiting the m-periodic scheduled access mechanism. We propose a throughput- and channel-aware dynamic scheduling algorithm that provides a realistic throughput under dynamic mobility and space- and time-varying links. First, various mobility patterns are generated, with special emphasis on space- and time-varying links, because their performance is most vulnerable in a dynamic environment. Deterministic pathloss values (as an estimate of the channel) are obtained from a motion capture system and bio-mechanical modeling; signal-to-noise ratio (SNR), bit error rate (BER) and packet error rate (PER) are then calculated. In its first phase, the proposed algorithm uses this estimated PER to select the potential nodes for a time slot. In the second phase, based on node priorities and data packet availability among the potential candidates, a slot is finally assigned to one node. This process is iterated by the coordinating node until the end of a superframe.
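A condensed sketch of this two-phase slot allocation is given below; the PER threshold, the node fields and the data structures are our own illustrative assumptions, not the exact algorithm.

```python
from dataclasses import dataclass

PER_THRESHOLD = 0.1   # assumed acceptable packet error rate for a usable link

@dataclass
class Node:
    id: int
    priority: int         # application-defined node priority (assumption)
    queued_packets: int   # data packets waiting for transmission

def assign_slots(nodes, n_slots, estimated_per):
    """Two-phase dynamic scheduling: filter nodes by estimated channel quality
    (PER), then assign each slot by priority among nodes with pending data."""
    schedule = []
    for slot in range(n_slots):
        # Phase 1: keep nodes whose estimated PER makes this slot worthwhile
        candidates = [n for n in nodes if estimated_per[n.id][slot] <= PER_THRESHOLD]
        # Phase 2: among candidates with queued packets, pick the highest priority
        ready = [n for n in candidates if n.queued_packets > 0]
        if ready:
            chosen = max(ready, key=lambda n: n.priority)
            chosen.queued_packets -= 1
            schedule.append((slot, chosen.id))
        else:
            schedule.append((slot, None))   # slot left unassigned in this superframe
    return schedule
```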
Results
The proposed scheduling scheme shows a significant gain over a reference scheme without dynamic adaptation. On average, 20 to 55 percent more packets are received, along with 1 to 5 joules of energy savings, though at the cost of a higher delay ranging from 20 to 200 ms, while operating at low power levels (i.e., 0 dBm, -5 dBm, -10 dBm). It is recommended that future wearable IEEE 802.15.6-compliant transceivers operate at -5 dBm to -8 dBm of transmission power; reducing the power level further in a dynamic environment can degrade the performance. It is also observed that the achievable throughput of the different time-varying links is good under realistic conditions as long as the data packet generation interval is greater than 100 ms.
Acknowledgment
The work was supported by NPRP grant #[6-1508-2-616] from the Qatar National Research Fund which is a member of Qatar Foundation. The statements made herein are solely the responsibility of the authors.
References
[1] “Facts and statistics on Wearable Technology,” 2015. [Online]. Available: http://www.statista.com/topics/1556/wearable-technology/.
[2] M. M. Alam and E. B. Hamida, “Surveying Wearable Human Assistive Technology for Life and Safety Critical Applications: Standards, Challenges and Opportunities,” MDPI Journal on Sensors, vol. 14, no. 5, pp. 9153–9209, 2014.
[3] “802.15.6-2012 - IEEE Standard for Local and metropolitan area networks - Part 15.6: Wireless Body Area Networks,” 2012. [Online]. Available: https://standards.ieee.org/findstds/standard/802.15.6-2012.html.
[4] M. M. Alam and E. B. Hamida, “Strategies for Optimal MAC Parameters Tuning in IEEE 802.15.6 Wearable Wireless Sensor Networks,” Journal of Medical Systems, vol. 39, no. 9, pp. 1–16, 2015.
[5] M. Alam and E. BenHamida, “Performance evaluation of IEEE 802.15.6 MAC for WBSN using a space-time dependent radio link model,” in IEEE 11th AICCSA Conference, Doha, 2014.
-
-
-
Real-Time Location Extraction for Social-Media Events in Qatar
1. Introduction
Social media gives us instant access to a continuous stream of information generated by users around the world. This enables real-time monitoring of users' behavior (Abbar et al., 2015), events' life-cycles (Weng and Lee, 2010), and large-scale analysis of human interactions in general. Social media platforms are also used to propagate influence, spread content, and share information about events happening in real time. Detecting the location of events directly from user-generated text can be useful in different contexts, such as humanitarian response, detecting the spread of diseases, or monitoring traffic. In this abstract, we define a system that can be used for any of the purposes described above, and illustrate its usefulness with an application for locating traffic-related events (e.g., traffic jams) in Doha.
The goal of this project is to design a system that, given a social-media post describing an event, predicts whether or not the event belongs to a specific category (e.g., traffic accidents) within a specific location (e.g., Doha). If the post is found to belong to the target category, the system proceeds with the detection of all possible mentions of locations (e.g., “Corniche”, “Sports R/A”, “Al Luqta Street”, etc.), landmarks (“City Center”, “New Al-Rayyan gas station”, etc.), and location expressions (e.g., “On the Corniche between the MIA park and the Souq”). Finally, the system geo-localizes (i.e., assigns latitude and longitude coordinates to) every location expression used in the description of the event. This makes it possible to place the different events onto a map; a downstream application will use these coordinates to monitor real-time traffic and geo-localize traffic-related incidents.
2. System Architecture
In this section we present an overview of our system. We first describe its general “modular” architecture, and then proceed with the description of each module.
2.1. General view
The general view of the system is depicted in Figure 1. The journey starts by listening to social media platforms (e.g., Twitter, Instagram) to catch relevant social posts (e.g., tweets, check-ins) using a list of handcrafted keywords related to the context of the system (e.g., road traffic). Relevant posts are then pushed through a three-step pipeline: we double-check the relevance of the post using an advanced binary classifier (the Content Filter); we then extract any location names mentioned in the post; next, we geo-locate the identified locations to their accurate placement on the map. This process filters out undesirable posts and augments the relevant ones with precise geo-location coordinates, which are finally exposed for consumption via a RESTful API. We provide details on each of these modules below.
Figure 1: Data processing pipeline.
2.2. Content filter
The Content Filter consists of a binary classifier that, given a tweet deemed to be about Doha, decides whether the tweet is a real-time report about traffic in Doha or not. The classifier receives as input tweets that have been tweeted from a location enclosed in a geographic rectangle (or bounding box) that roughly corresponds to Doha, and that contain one or more keywords expected to refer to traffic-related events (e.g., “accident”, “traffic”, “jam”, etc.). The classifier is expected to filter out those tweets that are not real-time reports about traffic (e.g., tweets that mention “jam” as a type of food, tweets that complain about the traffic in general, etc.). We build the classifier using supervised learning technology; in other words, a generic learning process learns, from a set of tweets that have been manually marked as being either real-time reports about traffic or not, the characteristics that a new tweet should have in order to be considered a real-time report about traffic. For our project, 1000 tweets were manually marked for training purposes. When deciding about a new tweet, the classifier looks for “cues” that, in the training phase, were found to be “discriminative”, i.e., helpful in taking the classification decision. In our project, we used the Stanford Maximum Entropy Classifier (Manning and Klein, 2003) to perform the discriminative training. In order to generate candidate cues, the tweet is preprocessed via a pipeline of natural language analysis tools, including a social-media-specific tokenizer (O'Connor et al., 2010) which splits words, and a rule-based Named-Entity Simplifier which replaces mentions of local entities with their corresponding meta-categories (for example, it replaces “@moi_qatar” or “@ashghal” with “government_entity”).
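For illustration only, the sketch below trains a comparable maximum-entropy (multinomial logistic regression) filter with scikit-learn over word and bigram cues; the toy tweets and labels are invented, and the real system used the Stanford classifier trained on the 1000 manually marked tweets.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in training data (1 = real-time traffic report, 0 = not).
tweets = ["accident on khalifa street near the tv roundabout",
          "this strawberry jam is delicious",
          "heavy traffic jam at the slope roundabout right now",
          "doha traffic is always bad in general"]
labels = [1, 0, 1, 0]

# Logistic regression over n-gram cues is the maximum-entropy model family.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, labels)
print(clf.predict(["crash reported on the corniche"]))
```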
2.3. NLP components
The Location Expression Extractor is a module that identifies (or extracts) location expressions, i.e., natural language expressions that denote locations (e.g., “@ the Slope roundabout”, “right in front of the Lulu Hypermarket”, “on Khalifa”, “at the crossroads of Khalifa and Majlis Al Taawon”, etc.). A location expression can be a complex linguistic object, e.g., “on the Corniche between the MIA and the underpass to the airport”. A key component of the Location Expression Extractor is the Location Named Entity Extractor, i.e., a module that identifies named entities of Location type (e.g. “the Slope roundabout”) or Landmark type (e.g., “the MIA”). For our purposes, a location is any proper name in the Doha street system (e.g., “Corniche”, “TV roundabout”, “Khalifa”, “Khalifa Street”); landmarks are different from locations, since the locations are only functional to the Doha street system, while landmarks have a different purpose (e.g., the MIA is primarily a museum, although its whereabouts may be used as a proxy of a specific location in the Doha street system – i.e., the portion of the Corniche that is right in front of it).
The Location Named Entity Extractor receives as input the set of tweets that have been deemed to be about some traffic-related event in Doha, and returns the same tweet where named entities of type Location or of type Landmark have been marked as such. We generate a Location Named Entity Extractor via (again) supervised learning technology. In our system, we used the Stanford CRF-based Named Entity Recognizer (Finkel et al., 2005) to recognize named entities of type Location or of type Landmark using a set of tweets where such named entities have been manually marked. From these “training” tweets the learning system automatically recognizes the characteristics that a natural language expression should have in order to be considered a named entity of type Location or of type Landmark. Again, the learning system looks for “discriminative” cues, i.e., features in the text that may indicate the presence of one of the sought named entities. To improve the accuracy over tweets, we used a tweet-specific tokenizer (O'Connor et al., 2010), a tweet-specific Part-of-Speech tagger (Owoputi et al., 2013) and an in-house gazetteer of locations related to Qatar.
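For illustration, training data for such a tagger is typically token-level BIO annotation of the kind sketched below; this example and its tag set are our own, not taken from the project's corpus.

```python
# One annotated tweet in BIO form: B-/I- mark the beginning/inside of an
# entity span; LOC = Location (street system), LAND = Landmark.
training_example = [
    ("accident", "O"), ("on", "O"), ("the", "O"),
    ("Corniche", "B-LOC"),
    ("in", "O"), ("front", "O"), ("of", "O"), ("the", "O"),
    ("MIA", "B-LAND"),
]
```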
2.4. Resolving location expressions onto the map
Once location entities are extracted using the NLP components, we use the Google, Bing and Nominatim APIs to resolve the location entities into geographic coordinates. Each location entity is geo-coded individually by the Google Geocoding API, the Bing Maps REST API and the Nominatim gazetteer. We use multiple geo-coding sources to increase the robustness of our application, as a single API might fail to retrieve geo-coding data. Given a location entity, the result of the geo-coding retrieval is formatted as a JSON object containing the name of the location entity, its address, and the corresponding geo-coding results from Bing, Google or Nominatim. The geo-coding process is validated by comparing the results of the different services. We first make sure that the returned location falls within Qatar's bounding box; we then compute the pairwise distances between the different geographic coordinates to ensure their consistency.
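A sketch of this validation step is given below; the bounding box and the 1 km consistency threshold are our own illustrative values.

```python
from itertools import combinations
from math import asin, cos, radians, sin, sqrt

QATAR_BBOX = (24.4, 50.7, 26.2, 51.7)   # (lat_min, lon_min, lat_max, lon_max), approximate

def in_qatar(lat, lon):
    lat_min, lon_min, lat_max, lon_max = QATAR_BBOX
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

def haversine_km(p, q):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def consistent(results, max_km=1.0):
    """Accept a geo-coded entity if every service's answer lies in Qatar and
    all pairwise distances between the answers stay below the threshold."""
    return all(in_qatar(*p) for p in results) and all(
        haversine_km(p, q) <= max_km for p, q in combinations(results, 2))
```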
2.5. Description of the RESTful API
In order to ease the consumption of the relevant geo-located posts and make it possible to integrate them with other platforms in a comprehensive way, we have built a RESTful API. In the context of our system, this means using HTTP verbs (GET, POST, PUT) to retrieve relevant social posts stored by our back-end processing.
Our API exposes two endpoints: Recent and Search. The former provides an interface to request the latest posts identified by our system; it supports two parameters: Count (the maximum number of posts to return) and Language (the language of posts to return, i.e., English or Arabic). The latter endpoint enables querying the posts for specific keywords, returning only posts that match them; it supports three parameters: Query (a list of keywords), Since (the date-time of the oldest post to retrieve), and From-To (two date-time parameters expressing the time interval of interest). In the case of a road-traffic application, one could request tweets about "accidents" that occurred in West Bay since the 10th of October.
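A usage sketch follows; the host name and the exact parameter spelling are hypothetical, and only the endpoints and parameters described above are taken from the system.

```python
import requests

BASE = "https://example.org/api"   # hypothetical deployment URL

# Recent endpoint: the latest 10 English posts identified by the system.
recent = requests.get(f"{BASE}/recent",
                      params={"count": 10, "language": "en"}).json()

# Search endpoint: posts matching keywords since a given date-time.
search = requests.get(f"{BASE}/search",
                      params={"query": "accident,west bay",
                              "since": "2015-10-10T00:00:00Z"}).json()
```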
3. Target: single architecture for multiple applications
Our proposed platform is highly modular (see Figure 1). This guarantees that relatively simple changes in some modules can make the platform relevant to any applicative context where locating user messages on a map is required. For instance, the content classifier (the first filtering element in the pipeline) can be oriented to mobility problems in a city: accident or congestion reporting, road blocking or construction sites, etc. With a suitable classifier, our platform will collect traffic and mobility tweets and geo-locate them when possible. However, there are many other contexts in which precise location is needed. For instance, in natural disaster management, it is well established that people involved in catastrophic events (floods, typhoons, etc.) use social media as a means to create awareness and to request help or medical attention (Imran et al., 2013). Quite often, these messages contain critical information for relief forces, who may not have enough knowledge of the affected place or accurate information on the level of damage to buildings or roads. Often, the task of reading these messages, locating them on a map and marking them is crowd-sourced to volunteers; we foresee that, in such time-constrained situations, our proposed technology would represent an advancement. Likewise, the system may be oriented towards other applications: weather conditions, leisure, etc.
4. System Instantiation
We have instantiated the proposed platform for the problem of road traffic in Doha. Our objective is to sense the traffic status in the city in real time using social media posts only. Figure 2 shows three widgets of the implemented system. First, the Geo-mapped Tweets widget shows a map of Doha with different markers: yellow markers symbolize tweets geo-located by the users, while red markers represent tweets geo-located by our system; large markers come from tweets with an attached photo, while small markers represent text-only tweets. Second, the Popular Hashtags widget illustrates hashtags mentioned by users, with the largest font size showing the most frequent one. Third, the Tweets widget lists the traffic-related tweets collected by our system.
Figure 2: Snapshot of some of the system's frontend widgets.
-
-
-
Sentiment Analysis in Comments Associated to News Articles: Application to Al Jazeera Comments
Authors: Khalid Al-Kubaisi, Abdelaali Hassaine and Ali Jaoua
Sentiment analysis is a very important research task that aims at understanding the general sentiment of a specific community or group of people. Sentiment analysis of Arabic content is still in its early development stages. In the scope of Islamic content mining, sentiment analysis helps in understanding which topics Muslims around the world are discussing, which topics are trending, and which topics will be trending in the future.
This study has been conducted on a dataset of 5000 comments on news articles collected from the Al Jazeera Arabic website. All articles were about the recent war against the Islamic State. The database has been annotated using CrowdFlower, a website for crowdsourcing the annotation of datasets. Users manually selected whether the sentiment associated with a comment was positive, negative or neutral. Each comment has been annotated by four different users, and each annotation is associated with a confidence level between 0 and 1. The confidence level reflects whether the users who annotated the same comment agreed (1 corresponds to full agreement between the four annotators and 0 to full disagreement).
Our method represents the corpus by a binary relation between the set of comments (x) and the set of words (y). A relation exists between comment (x) and word (y) if, and only if, (x) contains (y). Three binary relations are created for comments associated with positive, negative and neutral sentiments. Our method then extracts keywords from the obtained binary relations using the hyper concept method [1]. This method decomposes the original relation into non-overlapping rectangles and highlights, for each rectangle, the most representative keyword. The output is a list of keywords sorted in a hierarchical ordering of importance. The obtained keyword lists associated with positive, negative and neutral comments are fed into a random forest classifier of 1000 random trees in order to predict the sentiment associated with each comment of the test set.
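A minimal sketch of this classification stage is given below, with toy English data; the hyper concept keyword extraction of [1] is replaced by a crude frequency-based stand-in, and only the overall pipeline (per-class keyword lists, binary comment-word features, a 1000-tree random forest) mirrors the method described above:

```python
from collections import Counter
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

comments = ["great courageous decision", "terrible tragic news",
            "ordinary routine report", "wonderful hopeful step",
            "awful sad outcome", "plain factual summary"]
labels = ["pos", "neg", "neu", "pos", "neg", "neu"]

def keywords_per_class(comments, labels, k=3):
    # Stand-in for the hyper concept method: most frequent words per class.
    counts = {}
    for c, l in zip(comments, labels):
        counts.setdefault(l, Counter()).update(c.split())
    return sorted({w for cnt in counts.values() for w, _ in cnt.most_common(k)})

vocab = keywords_per_class(comments, labels)
vec = CountVectorizer(vocabulary=vocab, binary=True)  # binary comment-word relation
X = vec.transform(comments)

clf = RandomForestClassifier(n_estimators=1000, random_state=0)
clf.fit(X, labels)
print(clf.predict(vec.transform(["hopeful courageous news"])))
```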
Experiments have been conducted after splitting the database into 70% training and 30% testing subsets. Our method achieves a correct classification rate of 71% when considering annotations with all confidence values, and 89% when only considering annotations with a confidence value equal to 1. These results are very promising and testify to the relevance of the extracted keywords.
In conclusion, the hyper concept method extracts discriminative keywords which are used to successfully distinguish between comments containing positive, negative and neutral sentiments. Future work includes performing further experiments using a varying threshold level for the confidence value. Moreover, by applying a part-of-speech tagger, it is planned to perform keyword extraction on words corresponding to specific grammatical roles (adjectives, verbs, nouns, etc.). Finally, it is also planned to test this method on publicly available datasets such as the Rotten Tomatoes Movie Reviews dataset [2].
Acknowledgment
This contribution was made possible by NPRP grant #06-1220-1-233 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
References
[1] A. Hassaine, S. Mecheter, and A. Jaoua. “Text Categorization Using Hyper Rectangular Keyword Extraction: Application to News Articles Classification.” Relational and Algebraic Methods in Computer Science. Springer International Publishing, 2015. 312–325.
[2] B. Pang and L. Lee. 2005. “Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales”. In ACL, pages 115–124.
-
-
-
Flight Scheduling in the Airspace
Authors: Mohamed Kais Msakni, Mohamed Kharbeche, Mohammed Al-Salem and Abdelmagid Hammuda
This paper addresses an important problem in aircraft traffic management caused by the rapid growth of air traffic. The air route traffic control center has to deal with the different plans of airlines, in which they specify a requested entry time of their aircraft into the airspace. Each flight has to be assigned to a track and a level in order to meet the Federal Aviation Administration (FAA) safety standards. When two flights are assigned to the same track and level, a minimum separation time has to be ensured. If this condition cannot be satisfied, one of the flights will be delayed. This solution is undesirable for many reasons, such as missed connecting flights, a decrease in passenger satisfaction, etc.
The problem of track-level scheduling can be defined as follows. Given a set of flights, each flight has to be assigned to one track and one level. To ensure the separation time between two flights assigned to the same track and level, it is possible to delay the requested departure time of a flight. The objective is to minimize the overall flight delay.
To deal with this problem, we propose a mixed integer programming formulation to find a flight plan that minimizes the objective function while ensuring the FAA safety standards. In particular, this model considers an aircraft-dependent separation time: the separation time depends on the types of the aircraft assigned to the same track and level. However, some problems are too large to be solved in a reasonable time with the proposed model using a commercial solver. In this study, we therefore developed a scatter search (SS) to deal with larger instances. SS is an evolutionary metaheuristic with the advantage of a problem-independent structure, and it has been efficiently applied to a variety of optimization problems. Initially, SS starts with a set of solutions (the reference set) that is constantly updated through two procedures (solution generation and combination) with the aim of producing high-quality solutions.
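A simplified rendering of such a formulation (our own sketch, not the authors' exact model) uses binary variables $x_{ftl}$ (flight $f$ assigned to track $t$ and level $l$), delays $d_f \ge 0$, requested entry times $r_f$, pairwise ordering variables $y_{fg}$, minimum separations $s_{fg}$, and a large constant $M$:

```latex
\begin{align*}
\min \; & \sum_{f} d_f \\
\text{s.t.}\; & \sum_{t,l} x_{ftl} = 1 \quad \forall f \\
& (r_g + d_g) - (r_f + d_f) \ge s_{fg} - M\,(3 - x_{ftl} - x_{gtl} - y_{fg}) \quad \forall f<g,\; \forall t,l \\
& (r_f + d_f) - (r_g + d_g) \ge s_{gf} - M\,(2 - x_{ftl} - x_{gtl} + y_{fg}) \quad \forall f<g,\; \forall t,l \\
& x_{ftl} \in \{0,1\}, \quad y_{fg} \in \{0,1\}, \quad d_f \ge 0
\end{align*}
```

Here $y_{fg}=1$ orders flight $g$ after flight $f$; when both flights share a track and level, exactly one of the two separation constraints becomes active. The aircraft-dependent separation of the authors' model would make $s_{fg}$ a function of the aircraft types.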
In order to assess the quality of the exact method and the scatter search, we carried out an experimental study on a set of instances generated from real case data. This includes small (80 to 120 flights), medium (200 to 220 flights), and large (400 to 420 flights) instances. The mathematical model has been solved using CPLEX 12.6, and the scatter search has been coded in the C language under the Microsoft Visual Studio v12 environment. The tests were conducted on a Windows 7 machine with an Intel Core i7 and 8 GB of RAM. The model was tested on each instance with a 1-hour time limit. The results show that no instance was solved to optimality. For small instances, the model and the scatter search provide comparable results; however, for medium and large instances, the scatter search gives the best results.
This conference was made possible by the UREP award [UREP 13 - 025 - 2 - 010] from the Qatar National Research Fund (a member of The Qatar Foundation).
-
-
-
Named Entity Disambiguation using Hierarchical Text Categorization
Authors: Abdelaali Hassaine, Jameela Al Otaibi and Ali Jaoua
Named entity extraction is an important step in natural language processing. It aims at finding the entities which are present in text, such as organizations, places or persons. Named entity extraction is of paramount importance when it comes to automatic translation, as different named entities are translated differently. Named entities are also very useful for advanced search engines which aim at searching for detailed information regarding a specific entity. Named entity extraction is a difficult problem as it usually requires a disambiguation step, since the same word might belong to different named entities depending on the context.
This work has been conducted on the ANERCorp named entities database. This Arabic database contains four different named entities: person, organization, location and miscellaneous. The database contains 6099 sentences, out of which 60% are used for training, 20% for validation and 20% for testing.
Our method for named entity extraction contains two main steps: the first step predicts the list of named entities which are present at the sentence level. The second step predicts the named entity of each word of the sentence.
The prediction of the list of named entities at the sentence level is done by separating the document into sentences using punctuation marks. Subsequently, a binary relation between the set of sentences (x) and the set of words (y) is created from the obtained list of sentences. A relation exists between sentence (x) and word (y) if, and only if, (x) contains (y). A binary relation is created for each category of named entities (person, organization, location and miscellaneous). If a sentence contains several named entities, it is duplicated in the relation corresponding to each one of them. Our method then extracts keywords from the obtained binary relations using the hyper concept method [1]. This method decomposes the original relation into non-overlapping rectangles and highlights, for each rectangle, the most representative keyword. The output is a list of keywords sorted in a hierarchical ordering of importance. The obtained keyword lists associated with each category of named entities are fed into a random forest classifier of 10000 random trees in order to predict the list of named entities associated with each sentence. The random forest classifier produces, for each sentence, the list of probabilities corresponding to the existence of each category of named entities within the sentence.
RandomForest(sentence(i)) = (P(Person), P(Organization), P(Location), P(Miscellaneous)).
Subsequently, the sentence is associated with the named entities for which the corresponding probability is larger than a threshold set empirically on the validation set.
In the second step, we create a lookup table associating to each word in the database the list of named entities to which it corresponds in the training set.
For unseen sentences of the test set, the list of named entities predicted at the sentence level is produced, and for each word, the list of predicted named entities is also produced using the previously built lookup table. Ultimately, for each word, the intersection between the two predicted lists of named entities (at the sentence and the word level) gives the final predicted named entity. In the case where more than one named entity is produced at this stage, the one with the maximum probability is kept.
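The decision step can be sketched as follows; the fallback to the word-level lookup when the intersection is empty, and the toy data, are our own illustrative choices:

```python
def predict_word(word, sentence_probs, lookup, threshold=0.5):
    """sentence_probs: dict entity -> probability from the random forest.
    lookup: dict word -> set of entities seen for it in the training set."""
    sentence_level = {e for e, p in sentence_probs.items() if p >= threshold}
    word_level = lookup.get(word, set())
    # Intersect the two predictions; fall back to the lookup if disjoint.
    candidates = (sentence_level & word_level) or word_level
    if not candidates:
        return "O"  # not a named entity
    # Keep the candidate with the maximum sentence-level probability.
    return max(candidates, key=lambda e: sentence_probs.get(e, 0.0))

lookup = {"Doha": {"location", "organization"}}
probs = {"person": 0.1, "organization": 0.2, "location": 0.8, "miscellaneous": 0.05}
print(predict_word("Doha", probs, lookup))  # location
```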
We obtained an accuracy of 76.58% when only considering the lookup tables of named entities produced at the word level. When performing the intersection with the list produced at the sentence level, the accuracy reaches 77.96%.
In conclusion, hierarchical named entity extraction leads to improved results over direct extraction. Future work includes the use of other linguistic features and larger lookup tables in order to improve the results. Validation on other state-of-the-art databases is also considered.
Acknowledgements
This contribution was made possible by NPRP grant #06-1220-1-233 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
Reference
[1] A. Hassaine, S. Mecheter, and A. Jaoua. “Text Categorization Using Hyper Rectangular Keyword Extraction: Application to News Articles Classification”. Relational and Algebraic Methods in Computer Science. Springer International Publishing, 2015. 312–325.
-
-
-
SWIPT MIMO Relaying in Spectrum Sharing Networks with Interference Cancellation
Simultaneous wireless information and power transfer (SWIPT) is a promising solution to increase the lifetime of wireless nodes and hence alleviate the energy bottleneck of energy-constrained wireless networks. To date, there are three different SWIPT system designs: integrated SWIPT, closed-loop SWIPT, and decoupled SWIPT. Integrated SWIPT is the simplest design, where power and information are extracted by the mobile from the same modulated microwave transmitted by a base station (BS); for this scheme, the information transfer (IT) and power transfer (PT) distances are equal. The closed-loop design splits IT and PT between uplink and downlink, wherein PT takes place in the downlink and IT is dedicated to the uplink. The decoupled design adds a special base station called a power beacon (PB), in which PT and IT are orthogonalized by using different frequency bands or time slots to avoid interference. Powering a cognitive radio network through RF energy harvesting can therefore be efficient in terms of spectrum usage and energy limits for wireless networking. The RF energy harvesting technique is also applicable in cooperative networks, wherein an energy-constrained relay with a limited battery depends on an external charging mechanism to assist the transmission of source information to the destination. In an effort to further improve spectrum sharing network performance, a number of works have suggested incorporating multiple antenna techniques into cognitive relaying. In particular, transmit antenna selection with receive maximal ratio combining (TAS/MRC) is adopted as a low-complexity and power-efficient approach which achieves full transmit/receive diversity.
Since the secondary users (SUs) and primary users (PUs) share the same frequency band, there will inevitably be interference between the SUs and PUs. Therefore, reducing the effect of PU interference on the performance of the secondary receiver is of significant importance. Smart antennas can be employed to mitigate the PU interference: with knowledge of the direction of arrival (DoA), the receive radiation pattern can be shaped to place deep nulls in the directions of some of the interfering signals. Two null-steering algorithms have been proposed in the literature, namely the dominant interference reduction algorithm and the adaptive arbitrary interference reduction algorithm. The first algorithm requires perfect prediction and statistical ordering of the instantaneous power of the interference signals, whereas the latter does not need prior knowledge of the statistical properties of the interfering signals. In this work, we limit our analysis to the dominant interference reduction algorithm.
In this work, we consider dual-hop relaying with an amplify-and-forward (AF) scheme where the source, the relay, and the destination are equipped with multiple antennas. The relay node experiences co-channel interference (CCI), and the purpose of array processing at the relay is to provide interference cancellation. The energy-constrained relay thus collects energy from ambient RF signals, cancels the CCI, and then forwards the information to the destination. In particular, we provide a comprehensive analysis of the system assuming antenna selection at the source and the destination. We derive the exact and asymptotic end-to-end outage probability for the proposed system model. Key parameters featuring the diversity and coding gains are also obtained.
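Outage results of this kind can be sanity-checked by simulation. The sketch below estimates the outage probability of a heavily simplified version of the model, assuming Rayleigh fading, ideal interference cancellation at the relay, TAS at the source, MRC at the destination, and the standard min-SNR approximation of the AF end-to-end SNR; all parameter values are illustrative, and this is not the authors' analytical derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, Nt, Nr = 200_000, 4, 4          # trials, source antennas, destination antennas
snr1_db, snr2_db, gamma_th = 10.0, 10.0, 2.0

# Hop 1: transmit antenna selection -> best of Nt i.i.d. exponential gains.
g1 = rng.exponential(size=(n, Nt)).max(axis=1)
# Hop 2: MRC over Nr branches -> sum of Nr i.i.d. exponential gains.
g2 = rng.exponential(size=(n, Nr)).sum(axis=1)

# min-SNR approximation of the amplify-and-forward end-to-end SNR.
snr_e2e = np.minimum(10**(snr1_db / 10) * g1, 10**(snr2_db / 10) * g2)
print("outage probability ~", np.mean(snr_e2e < gamma_th))
```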
-
-
-
Action Recognition in Spectator Crowds
Authors: Arif Mahmood and Nasir Rajpoot
During the Football Association competitions held in the UK in 2013, 2,273 people were arrested due to events of lawlessness and disorder, according to statistics collected by the UK Home Office [1]. According to a survey of major soccer stadium disasters around the world, more than 1500 people died and more than 5000 were injured from 1902 to 2012 [2]. Therefore, understanding spectator crowd behaviour is an important problem for public safety management and for the prevention of dangerous activities.
Computer vision is the platform used by researchers for efficient crowd management research through video cameras. However, most research efforts primarily show results on protest crowds or casual crowds, while spectator crowds have received little attention. On the other hand, action recognition research has mostly addressed actions performed by one or two actors, while the actions performed by individuals in dense spectator crowds have not been addressed and remain an unsolved problem.
Action recognition in dense crowds poses very difficult challenges, mostly due to the low resolution of the subjects and significant variations in action performance by the same individuals. Moreover, different individuals perform the same action quite differently, the spatial distribution of performers varies with time, and a scene contains multiple actions at the same time. Thus, compared to single-actor action recognition, noise and outliers are significantly larger, and action start and stop points are not well defined, making action recognition very difficult.
In this work we aim to recognize the actions performed by individuals in spectator crowds. For this purpose we consider a recently released dataset consisting of spectators at the 26th Winter Universiade held in Italy in 2013 [3]. Data was collected during the last four matches held in the same ice stadium using 5 cameras. Three high-resolution cameras focussed on different parts of the spectator crowd with a 1280 × 1024 pixel resolution and a 30 fps temporal resolution. Figure 1 shows an example image from the spectator crowd dataset.
For action recognition in spectator crowds, we propose to compute dense trajectories in the crowd videos using optical flow [4]. Trajectories are initiated on a dense grid, and the starting points must satisfy a quality measure based on the KLT feature tracker (Fig. 2). Trajectories exhibiting motion lower than a minimum threshold are discarded. Along each trajectory, shape and texture are encoded using Histograms of Oriented Gradients (HOG) features [5], and motion is encoded using Histogram of Flow (HOF) features [6]. The resulting feature vectors are grouped using the person bounding boxes provided in the dataset (Fig. 4). Note that person detectors especially designed for the detection and segmentation of persons in dense crowds can also be used for this purpose [7].
All trajectories corresponding to a particular person are considered to encode the actions performed by that person. These trajectories are divided into overlapping temporal windows of width 30 frames (one second). Two consecutive windows have an overlap of 66%. Each person-time window is encoded using a bag-of-words technique, as explained below.
The S-HOCK dataset contains 15 videos of spectator crowds. For the purpose of training we use 10 videos, and the remaining 5 videos are used for testing. From the training videos, 100,000 trajectories are randomly sampled and grouped into 64 clusters using the k-means algorithm. Each cluster center is considered an item in the code-book. Each trajectory in a person-time group of trajectories is encoded using this code-book. This encoding is performed in the training as well as the test videos using the bag-of-words approach. The code-book is considered part of the training process and saved.
For the purpose of bag-of-words encoding, the distance of each trajectory in the person-time trajectory group is measured from all items in the code-book. Here we follow two approaches. In the first approach, only one vote is cast at the index corresponding to the best matching code-book item. In the second approach, 5 votes are cast, corresponding to the 5 best matching code-book items. These votes are given weights inversely proportional to the distance of the trajectory from each of the five best matching code-book items.
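Both voting schemes can be sketched as below; the inverse-distance weighting is our reading of the description above, and the random arrays merely stand in for real HOG/HOF trajectory descriptors:

```python
import numpy as np

def encode(trajs, codebook, k=5, eps=1e-8):
    """Histogram over code-book items for one person-time trajectory group."""
    hist = np.zeros(len(codebook))
    for t in trajs:
        d = np.linalg.norm(codebook - t, axis=1)   # distance to every item
        nearest = np.argsort(d)[:k]                # k best matching items
        w = 1.0 / (d[nearest] + eps)               # weights inverse to distance
        hist[nearest] += w / w.sum()               # k = 1 gives the single-vote scheme
    return hist

codebook = np.random.rand(64, 96)   # 64 k-means centers of e.g. HOG+HOF features
trajs = np.random.rand(120, 96)     # trajectories of one person-time window
print(encode(trajs, codebook).shape)  # (64,)
```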
In our experiments we observe better action recognition performance with the multi-voting strategy compared to the single-vote scheme, because more information is captured in the multi-voting strategy. In the S-HOCK dataset, each person is manually labelled as performing one of 23 actions, including the ‘other’ action which covers all actions not included in the first 22 categories (Fig. 3). Each person-time group of trajectories is given an action label from the dataset. Once this group is encoded using the code-book, it becomes a single histogram vector. Each of these vectors is given the same action label as the corresponding person-time trajectory group.
The labelled vectors obtained from the training dataset are used to train both linear and kernel SVMs using a one-versus-all strategy. The labels of the vectors in the test data are used as ground truth, and the learned SVMs are used to predict the label of each test vector independently. The predicted labels are then compared with the ground truth labels to establish the action recognition accuracy. We observe an accuracy increase of 3% to 4% when an SVM with a Gaussian RBF kernel is used. Results are shown in Table 1, and precision-recall curves are shown in Figs. 5 & 6.
In our experiments we observe that the applauding and flag-shaking actions obtain higher accuracy than the other actions in the dataset (Table 1). This is mainly due to the fact that these actions have a higher frequency of occurrence and consist of significant discriminative motion, while the other actions have a low frequency of occurrence and, in some cases, non-discriminative motion. For example, for the using-device action, when someone in the crowd uses a mobile phone or a camera, motion-based detection is not very efficient.
References
[1] Home Office and The Rt Hon Mike Penning MP, “Football-related arrests and banning orders, season 2013 to 2014”, published 11 September 2014.
[2] Associated Press, “Major Soccer Stadium Disasters”, The Wall Street Journal (World), published 1 February 2012.
[3] Conigliaro, Davide, et al. “The S-HOCK Dataset: Analyzing Crowds at the Stadium.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[4] Wang, Heng, et al. “Action recognition by dense trajectories.” Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE, 2011.
[5] Dalal, Navneet, and Bill Triggs. “Histograms of oriented gradients for human detection.” Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on. Vol. 1. IEEE, 2005.
[6] Dalal, Navneet, Bill Triggs, and Cordelia Schmid. “Human detection using oriented histograms of flow and appearance.” Computer Vision–ECCV 2006. Springer Berlin Heidelberg, 2006. 428–441.
[7] Idrees, Haroon, Khurram Soomro, and Mubarak Shah. “Detecting Humans in Dense Crowds using Locally-Consistent Scale Prior and Global Occlusion Reasoning.” IEEE TPAMI 2015.
-
-
-
Plasmonic Modulator Based on Fano Resonance
The field of plasmonics continuously attracts research in the area of integrated photonics development, aiming to integrate photonic components, devices and detectors in a single photonic chip, just as electronic chips contain many electronic components. The interesting properties that plasmonics offers include electromagnetic field enhancement and the confinement of propagating surface plasmon polaritons to sub-100 nm features at metal-dielectric interfaces. The field of plasmonics is thereby very promising for shrinking photonic components to sizes that cannot be achieved using conventional optics, and in particular for the silicon photonics industry. Many applications based on plasmonics are being increasingly developed and studied, such as electromagnetic field enhancement for surface-enhanced spectroscopy, wave guiding, sensing, modulation, switching, and photovoltaic applications.
We hereby propose a novel compact plasmonic resonator that can be utilized for different applications that depend on optical resonance phenomena in the near-infrared spectral range, a very interesting range for a variety of applications, including sensing and modulation. The resonator structure consists of a gold layer which is etched to form a metal-insulator-metal waveguide and a rectangular cavity. The rectangular cavity and the waveguide are initially treated as filled with a dielectric material. The strong reflectivity of gold at frequencies below the plasma frequency is the origin of the Fabry–Perot resonator behavior of the rectangular cavity. The Fano resonance was produced successfully and controlled by varying the rectangular cavity dimensions. The Fano profile is generated as the result of the redistribution of the electromagnetic field in the rectangular cavity, as depicted by the plasmonic mode distribution in the resonator. The Fano resonance is characterized by its sharp spectral line, which attracts applications requiring sharp spectral line shapes, such as sensing and modulation.
Optical modulators are key components in modern communication technology. Research on optical modulators aims at achieving compact designs, low power consumption and large-bandwidth operation. Plasmonic modulators emerge as promising devices, since they can combine high modulation speeds with very compact designs.
The operation mechanism of our plasmonic modulator is as follows: instead of a constant refractive index dielectric, the metal-insulator-metal waveguide and the rectangular cavity are filled with an electro-optic polymer whose refractive index depends on a controlled external quantity. Efficient modulation was achieved by changing the applied voltage (DC signal) on the metal contacts, which changes the refractive index of the polymer, thereby shifting the resonant wavelength position of the resonator and leading to signal modulation. Our modulator operates at the telecom wavelength of 1.55 μm and is therefore suitable for modern communication technology.
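To first order, this tuning mechanism follows from the Fabry–Perot resonance condition; the relation below is our own simplified sketch, with $L$ the cavity length, $m$ the mode order, and $\Delta n_{\mathrm{eff}}$ the voltage-induced change of the polymer's effective index:

```latex
m\,\frac{\lambda_{\mathrm{res}}}{2} = n_{\mathrm{eff}}\,L
\quad\Longrightarrow\quad
\Delta\lambda_{\mathrm{res}} \approx \lambda_{\mathrm{res}}\,\frac{\Delta n_{\mathrm{eff}}}{n_{\mathrm{eff}}}
```

A small index change thus shifts the sharp Fano line across the fixed laser wavelength, switching the transmitted intensity between high and low states.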
Finite Difference Time Domain (FDTD) simulations were conducted to design the modulator structure, to run the simulation experiments, and to study the resonance effects of the structure and optimize its response. The most important result is the efficient modulation of the optical energy at the wavelengths required in modern communication technology, around 1.5 μm. All simulations were carried out using the commercially available Lumerical FDTD software.
-
-
-
Conceptual-based Functional Dependency Detection Framework
By Fahad Islam
Nowadays, knowledge discovery from data is one of the challenging problems, due to its importance in different fields such as biology, economics and the social sciences. One way of extracting knowledge from data is to discover functional dependencies (FDs). An FD captures the relation between different attributes, so that the value of one or more attributes is determined by another attribute set [1]. FD discovery helps in many applications, such as query optimization, data normalization, interface restructuring, and data cleaning. A plethora of functional dependency discovery algorithms has been proposed; some of the most widely used are TANE [2], FD_MINE [3], FUN [4], DFD [5], DEP-MINER [6], FASTFDS [7] and FDEP [8]. These algorithms extract FDs using different techniques, such as: (1) building a search space of all attribute combinations in an ordered manner, then searching for candidate attributes that are assumed to have a functional dependency between them; (2) generating agreeing and difference sets, where the agreeing sets are acquired by applying a cross product of all tuples, the difference sets are their complement, and both sets are used to infer the dependencies; (3) generating one generic set of functional dependencies, in which each attribute determines all other attributes, and then updating this set by removing some dependencies to include more specialized ones through pairwise record comparisons.
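For illustration, checking whether a single candidate FD X -> Y holds in a given relation is straightforward (the toy table below is our own example; discovery algorithms differ in how they search the space of such candidates):

```python
import pandas as pd

def holds(df, X, Y):
    """An FD X -> Y holds when every value combination of X
    determines exactly one value for each attribute in Y."""
    return (df.groupby(list(X))[list(Y)].nunique() <= 1).all().all()

df = pd.DataFrame({"emp": ["a", "b", "c"],
                   "dept": ["d1", "d1", "d2"],
                   "manager": ["m1", "m1", "m2"]})
print(holds(df, ["dept"], ["manager"]))  # True: dept -> manager
print(holds(df, ["manager"], ["emp"]))   # False: m1 maps to two employees
```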
Huge efforts have been dedicated to comparing the most widely used algorithms in terms of runtime and memory consumption. However, no attention has been paid to the accuracy of the resultant set of functional dependencies. This accuracy is defined by two main factors: the set being complete and minimal.
In this paper, we propose a conceptual-based functional dependency detection framework. The proposed method is mainly based on Formal Concept Analysis (FCA), a mathematical framework rooted in lattice theory that is used for conceptual data analysis, where data is represented in the form of a binary relation called a formal context [9]. From this formal context, a set of implications is extracted; these implications have the same form as FDs. Implications are proven to be semantically equivalent to the set of all functional dependencies available in a given database [10]. This set of implications should be the smallest set representing the formal context, which is termed the Duquenne–Guigues, or canonical, basis of implications [11]. Moreover, completeness of the implications is achieved by applying the Armstrong rules discussed in [12].
The proposed framework is composed of three main components:
Data transformation component: converts the input data into a binary formal context.
Reduction component: applies data reduction on tuples or attributes.
Implication extraction component: produces the minimal and complete set of implications.
The key benefits of the proposed framework are:
1 It works on any kind of input data (qualitative and quantitative), which is automatically transformed into a formal context of binary relations;
2 A crisp Lukasiewicz data reduction technique is implemented to remove redundant data, which helps reduce the total runtime;
3 The set of implications produced is guaranteed to be minimal, due to the use of the Duquenne–Guigues algorithm in the extraction;
4 The set of implications produced is guaranteed to be complete, due to the use of the Armstrong rules.
The proposed framework is compared to the seven most commonly used algorithms listed above and evaluated in terms of runtime, memory consumption and accuracy using benchmark datasets.
Acknowledgement
This contribution was made possible by NPRP-07-794-1-145 grant from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
-
-
-
An Arabic Text-to-Picture Mobile Learning System
Authors: AbdelGhani Karkar, Jihad Al Ja'am and Sebti Foufou
Handheld devices and software applications have the potential to improve learning strength, awareness, and career development. Many mobile-based learning applications are available on the market, but the shortage of Arabic learning material is not taken into consideration. We have built an Arabic Text-to-Picture (TTP) mobile educational application which performs knowledge extraction and concept analysis to generate pictures that represent the content of an Arabic text. The knowledge extraction is based on Arabic semantic models covering important scopes for young children and new Arabic learners (e.g., grammar, nature, animals). The concept analysis uses semantic reasoning, semantic rules, and an Arabic natural language processing (NLP) tool to identify word-to-word relationships. The retrieval of images is done automatically from a local repository and online search engines (e.g., Google or Bing). The instructor can select the Arabic educational content, obtain semi-automatically generated pictures, and use them for explanation. Preliminary results show improvement in Arabic learning strength and memorization.
Keywords
Mobile Learning, Natural Language Processing, Ontology, Multimedia Learning, Engineering Education.
I. Introduction
Nowadays, mobile learning environments are extensively used in diverse fields and have become commonplace in education. In such an environment, learners are able to reach online educational materials from any location. Learners of the Arabic language suffer from a lack of adequate resources; in fact, most educational software, tools, and web sites use classical techniques for introducing concepts and explaining vocabulary. We present in this paper a text-to-picture (TTP) educational mobile system that promotes Arabic children's stories through semi-automatically generated pictures that illustrate their contents in an attractive manner. Preliminary results show that the system enhances Arabic learners' comprehension, deduction and realization.
II. Background
Natural language processing (NLP) concerns the extraction of useful information from, and the mining of, natural text. This information can be used to identify the scope of a text in order to generate summaries, classify contents and teach vocabulary. Diverse NLP-based systems that illustrate text with images have been developed recently [1, 2, 3]. In general, these systems divide the text into segments and single words, access local multimedia resources, or explore the web to get pictures and images to illustrate the content.
None of these proposed systems and techniques covers the Arabic language. In this paper, we propose an Arabic TTP educational system using multimedia technology to teach children in an attractive way. Our proposal generates multimedia tutorials dynamically by using Arabic text processing, entity relationship extraction, a multimedia ontology, and online extraction of multimedia contents fetched from the Google search engine.
III. Methodology
In order to develop our system, we first created the general system architecture, designed the end-user graphical user interface, designed the semantic model that stores all semantic information about terms, and collected and analyzed educational stories. We have gathered 30 educational stories, annotated their terms, and associated illustrations manually. Illustrations were gathered from the Internet and an educational repository. The semantic model is developed using the Protégé editor, a free open-source ontology editor developed at Stanford [4]. The semantic model is composed of many classes that are referred to as concepts.
IV. The proposed system
The proposed system is a client-server application. When the server is launched, it loads its packages and components, loads the defined ontology and the text parser components, and finally opens a connection to listen for users' requests. Upon a successful connection, the user is able to enter new Arabic stories or open existing ones and process them. On the client side, the processing request and the response for a story are handled in a separate thread, so that the user can continue working without any interruption. Finally, the server's reply is displayed to the user on his mobile device; it consists of the processed Arabic story, related images, and different questions about an animal.
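The request/response pattern just described can be sketched as follows; the port, the line-based framing, and the placeholder processing step are all our own assumptions:

```python
import socket
import socketserver
import threading
import time

class StoryHandler(socketserver.StreamRequestHandler):
    def handle(self):
        story = self.rfile.readline().decode("utf-8").strip()
        # ... Arabic text parsing, ontology lookup and image retrieval would go here ...
        self.wfile.write(f"processed: {story}\n".encode("utf-8"))

def serve():
    # Threading server: each client request is handled in its own thread.
    with socketserver.ThreadingTCPServer(("localhost", 9099), StoryHandler) as srv:
        srv.serve_forever()

threading.Thread(target=serve, daemon=True).start()
time.sleep(0.2)  # let the server come up

# Client side: runs off the UI thread so the user is never blocked.
with socket.create_connection(("localhost", 9099)) as c:
    c.sendall("a short Arabic story\n".encode("utf-8"))
    print(c.makefile().readline())
```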
V. Conclusion
This study presents a complete system that automatically generates illustrations for Arabic stories through text processing, an Arabic ontology, relationship extraction, and illustration generation. The proposed system belongs to learning technology and can run on mobile devices to teach children in an attractive and non-traditional style. Preliminary results demonstrate that the system improves learners' comprehension and realization.
References
[1] Bui, Duy, Carlos Nakamura, Bruce E Bray, and Qing Zeng-Treitler, “Automated illustration of patients instructions,” in AMIA Annual Symposium Proceedings, vol. 2012, pp. 1158, 2012.
[2] Li, Cheng-Te, Chieh-Jen Huang, and Man-Kwan Shan, “Automatic generation of visual story for fairy tales with digital narrative,” in Web Intelligence, vol. 13, pp. 115–122, 2015.
[3] Ustalov, Dmitry and R Kudryavtsev, “An Ontology Based Approach to Text to Picture Synthesis Systems,” in Proceedings of the 2nd International Workshop on Concept Discovery in Unstructured Data (CDUD 2012), 2012.
[4] Protégé. Ontology Editor Software. Available from: http://protege.stanford.edu, Accessed: September 2015.
-
-
-
Discovering the Truth on the Web Data: One Facet of Data Forensics
Authors: Mouhamadou Lamine Ba, Laure Berti-Equille and Hossam M. Hammady
Data Forensics with Analytics, or DAFNA for short, is an ambitious project initiated by the Data Analytics Research Group at Qatar Computing Research Institute, Hamad Bin Khalifa University. Its main goal is to provide effective algorithms and tools for determining the veracity of structured information originating from multiple sources. The ability to efficiently estimate the veracity of data, along with the reliability level of the information sources, is a challenging problem with many real-world use cases (e.g., data fusion, social data analytics, rumour detection, etc.) in which users rely on a semi-automated data extraction and integration process in order to consume high-quality information for personal or business purposes. DAFNA's vision is to provide a suite of tools for data forensics and to investigate various research topics, such as fact-checking and truth discovery and their practical applicability. We will present our ongoing development (dafna.qcri.org) on extensively comparing the state-of-the-art truth discovery algorithms, releasing a new system and the first REST API for truth discovery, and designing a novel hybrid truth discovery approach using active ensembling. Finally, we will briefly present real-world applications of truth discovery from Web data.
Efficient Truth Discovery. Truth discovery is a hard problem, since there is no a priori knowledge about the veracity of the provided information or the reliability level of the online sources. This raises many questions about a thorough understanding of the state-of-the-art truth discovery algorithms and their applicability for actionable truth discovery. A new truth discovery approach is needed; it should be comprehensible and domain-independent, take advantage of the benefits of existing solutions, and be built on realistic assumptions for easy use in real-world applications. In this context, we propose an approach that deals with open truth discovery challenges and consists of the following contributions: (i) a thorough comparative study of existing truth discovery algorithms; (ii) the design and release of the first online truth discovery system and the first REST API for truth discovery, available at dafna.qcri.org; (iii) a hybrid truth discovery method using active ensembling; and (iv) an application to query answering related to Qatar, where the veracity of information provided by multiple Web sources is estimated.
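For readers unfamiliar with the problem, the simplest conceivable baseline is majority voting over conflicting claims; the sketch below illustrates the task itself and is not one of DAFNA's algorithms, which additionally estimate source reliability:

```python
from collections import Counter

# Each data item receives possibly conflicting claims from several sources;
# majority voting takes the most frequent claim as the estimated truth.
claims = {  # data item -> {source: claimed value}
    "Qatar_capital": {"s1": "Doha", "s2": "Doha", "s3": "Al Rayyan"},
    "QNCC_city":     {"s1": "Doha", "s2": "Lusail", "s3": "Doha"},
}

truth = {item: Counter(vals.values()).most_common(1)[0][0]
         for item, vals in claims.items()}
print(truth)  # {'Qatar_capital': 'Doha', 'QNCC_city': 'Doha'}
```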
-
-
-
Identifying Virality Attributes of Arabic Language News Articles
Authors: Sejeong Kwon, Sofiane Abbar and Bernard J. Jansen
Our research is focused on expanding the reach and impact of Arabic language news articles by attracting more readers. In pursuit of this goal, we analyze the attributes that result in certain news articles becoming viral, relative to other news articles that do not become viral or are less viral. We focus on Arabic language news articles, as they have unique linguistic, cultural, and social constraints relative to news stories in most Western languages. In order to understand virality, we take two approaches, time-series and linguistic, on an Arabic language dataset of more than 1,000 news articles with associated temporal traffic data. For data collection, we selected Kasra (“a breaking”) (http://kasra.co/), an Arabic language online news site that targets Arabic speakers worldwide, particularly in the Middle East North Africa (MENA) region. We originally gathered more than 3,000 articles, then gathered traffic data for this set of articles, reducing the set to more than 1,000 articles with complete traffic data. We focus first on the temporal attributes in order to identify virality clusters within this set of articles. Then, with topical analysis, we seek to identify linguistic aspects common to the articles within each virality cluster identified by the time-series analysis. Based on the results of the time-series analysis, we cluster articles with common temporal characteristics of traffic access. Once the articles are clustered by time series, we analyze each cluster for content attributes, topical and linguistic, in order to identify specific attributes that may be causing the virality of articles within each time-series cluster. To compute dissimilarity between time series, we utilize and evaluate the performance of several state-of-the-art time-series dissimilarity-based clustering approaches, such as dynamic time warping and discrete wavelet transformation, among others. To identify the dissimilarity algorithm with the most discriminating power, we conduct a principal component analysis (PCA), a statistical technique used to highlight variations and patterns in a dataset. Based on the findings of our PCA, we select discrete wavelet transformation-based dissimilarity as the best time-series algorithm for our research, because the resulting principal axes explain a larger proportion of the variability (75.43 percent) than the other time-series algorithms we employed. We identify five virality clusters using time series. For topic modeling, we employ Latent Dirichlet Allocation (LDA). LDA is a generative probabilistic model for collections of discrete data, such as text, which explains similarities among groups of observations within a dataset; for text modeling, the topic probabilities of LDA provide an explicit representation of a document. For the topical classification analysis, we use Linguistic Inquiry and Word Count (LIWC), a sentiment analysis tool. LIWC is a text processing program based on the occurrences of words in several categories covering writing style and psychological meaning. Prior empirical work shows the value of LIWC linguistic analysis for detecting meaning in various experimental settings, including attention focus, thinking style, and social relationships. In terms of results, surprisingly, the article topic is not predictive of the virality of Arabic language news articles.
Instead, we find that the linguistic aspects and style of a news article are the most predictive attributes of virality for Arabic news articles. In analyzing the attributes of virality in Arabic language news articles, our research thus finds that, perhaps counter-intuitively, the topic of an article does not impact its virality; the style of the article is the most impactful attribute for predicting virality. Building on these findings, we will leverage aspects of the news articles, together with other factors, to develop tools that assist content creators in reaching their user segments more effectively. Our research results will assist in understanding the virality of Arabic news and ultimately improve the readership and dissemination of Arabic language news articles.
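The wavelet-based clustering step described above can be sketched as follows; the synthetic traffic curves, the Haar wavelet, and the use of hierarchical clustering are our own illustrative choices, as the abstract only states that a discrete wavelet transformation dissimilarity was selected via PCA:

```python
import numpy as np
import pywt
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(1)
traffic = rng.poisson(20, size=(100, 64)).astype(float)  # 100 articles x 64 time bins

def dwt_features(series, level=3):
    # Dissimilarity between two series = distance in wavelet-coefficient space.
    coeffs = pywt.wavedec(series, "haar", level=level)
    return np.concatenate(coeffs)

features = np.array([dwt_features(s) for s in traffic])
labels = fcluster(linkage(features, method="ward"), t=5, criterion="maxclust")
print(np.bincount(labels))  # sizes of the 5 virality clusters (index 0 unused)
```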
-
-
-
Efforts Towards Automatically Generating Personas in Real-time Using Actual User Data
Authors: Bernard J. Jansen, Jisun An, Haewoon Kwak and Hoyoun Cho
The use of personas is an interactive design technique with considerable potential for product and content development. A persona is a representation of a group or segment of users sharing common behavioral characteristics. Although it represents a segment of users, a persona is generally developed in the form of a detailed narrative about an explicit but fictitious individual that stands for the collection of users possessing similar behaviors or characteristics. In order to make the fictitious individual appear as a real person to the product developers, the persona narrative usually contains a variety of demographic and behavioral details about socio-economic status, gender, hobbies, family members, friends, and possessions, among many other data. The narrative of a persona normally also addresses the goals, needs, wants, frustrations and other emotional aspects of the fictitious individual that are pertinent to the product being designed. However, personas have typically been viewed as fairly static. In this research, we demonstrate an approach for creating and validating personas in real time, based on automated analysis of actual user data. Our data collection site and research partner is AJ+ (http://ajplus.net/), a news channel from the Al Jazeera Media Network that is natively digital, with a presence only on social media platforms and a mobile application. Its media concept is unique in that AJ+ was designed from the ground up to serve news in the medium of the viewer, rather than a teaser in one medium with a redirect to a website. In pursuit of our overall research objective of automatically generating personas in real time, we are specifically interested in understanding the AJ+ audience by identifying (1) whom they are reaching (i.e., market segments) and (2) what competitive (i.e., non-AJ+) content is associated with each market segment. Focusing on one aspect of user behavior, we collect 8,065,350 instances of link sharing by 54,892 users of the online news channel, specifically examining the domains these users share. We then cluster users based on the similarity of the domains they share, identifying seven personas based on this behavioral aspect. We conduct term frequency-inverse document frequency (tf-idf) vectorization and remove outliers with fewer than 5 shares (too unique) or appearing in more than 80% of all users' shares (too popular). We use K-means++ clustering (K = 2..10), an advanced version of K-means that improves the selection of initial seeds and works effectively on a very sparse (user-link) matrix. We use the “elbow” method to choose the optimal number of clusters, which is eight in this case. In order to characterize each cluster, we list the top 100 domains from each cluster and discover that there are large overlaps among clusters. We then remove from each cluster the domains that exist in another cluster, in order to identify relevant, unique, and impactful domains. This de-duplication results in the elimination of one cluster, leaving us with a set of clusters where each cluster is characterized by domains that are shared only by users within that cluster. We note that the K-means++ clustering method can easily be replaced with other clustering methods in various situations.
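The clustering step can be sketched as follows; the toy user histories are our own, and only the pipeline (tf-idf over shared domains, k-means++ for K = 2..10, elbow selection on the inertia curve) follows the description above:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# One "document" per user: the space-separated list of domains they shared.
users = ["bbc.com cnn.com bbc.com", "spiegel.de bild.de",
         "bbc.com nytimes.com", "bild.de spiegel.de spiegel.de"] * 25

# Keep whole domains as tokens (the default pattern would split on dots).
X = TfidfVectorizer(token_pattern=r"\S+").fit_transform(users)

inertias = {}
for k in range(2, 11):
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0).fit(X)
    inertias[k] = km.inertia_
print(inertias)  # pick the K where the curve bends (the "elbow")
```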
Demonstrating that these insights can be used to develop personas in real time, the research results provide insights into competitive marketing, topic interests, and preferred system features for the users of the online news medium. Using the description of each shared link, we detect its language. 55.2% (30,294) of users share links in just one language, and 44.8% share links in multiple languages. The most frequently used language is English (31.98%), followed by German (5.69%), Spanish (5.02%), French (4.75%), Italian (3.46%), Indonesian (2.99%), Portuguese (2.94%), Dutch (2.94%), Tagalog (2.71%), and Afrikaans (2.69%). As millions of domains were shared, we use the top one hundred domains for each cluster, resulting in 700 top domains shared by the 54,892 AJ+ users. As mentioned, de-duplication resulted in the elimination of one cluster (11,011 users, 20.06%), leaving seven unique clusters based on domain sharing, representing 43,881 users. We then demonstrate how these findings can be leveraged to generate real-time personas based on actual user data. We stream the data analysis results into a relational database and combine them with demographic data gleaned from available sources such as Facebook and other social media accounts, using each of the seven clusters as representative of a persona. We give each persona a fictional name and use a stock photo as its face. Each persona is linked to the top alternate (i.e., non-AJ+) domains its users most commonly share, and the personas' shared links are updateable with new data. The research implication is that personas can be generated in real time, instead of being the result of a laborious, time-consuming development process.
-
-
-
Creating Instructional Materials with Sign Language Graphics Through Technology
Authors: Abdelhadi Soudi and Corinne Vinopol
The education of deaf children in the developing world is in a very dire state, and there is a dearth of sign language interpreters to assist sign language-dependent students with translation in the classroom. Illiteracy within the deaf population is rampant. Over the past several years, a unique team of Moroccan and American deaf and hearing researchers has united to enhance the literacy of deaf students by creating tools that incorporate Moroccan Sign Language (MSL) and American Sign Language (ASL), under funding grants from USAID and the National Science Foundation (NSF). MSL is a gestural language distinct from both the spoken languages and the written language of Morocco, and it has no text representation. Accordingly, translation is quite challenging and requires representing MSL in graphics and video.
Many deaf and hard-of-hearing people do not have a good command of their native spoken language because they have no physiological access to it. Because oral languages depend to a great extent upon phonology, the reading achievement of deaf children usually falls far short of that of hearing children of comparable abilities. By extension, reading instruction techniques that rely on phonological awareness, letter/sound relationships, and decoding, all skills proven essential for reading achievement, have no sensory relevance. Even in the USA, where statistics are available and education of the deaf is well advanced, deaf high school graduates have, on average, a fourth grade reading level; only 7–10% of deaf students read beyond a seventh to eighth grade level; and approximately 20% of deaf students leave school with a second grade or lower reading level (Gallaudet University's national achievement testing programs (1974, 1983, 1990, and 1996); Durnford, 2001; Braden, 1992; King & Quigley, 1985; Luckner, Sebald, Cooney, Young III, & Muir, 2006; Strong & Prinz, 1997).
Because of spoken language inaccessibility, many deaf people rely on a sign language. Sign language is a visual/gestural language that is distinct from spoken Moroccan Arabic and Modern Standard/written Arabic, and it has no text representation. It can only be depicted via graphics, video, and animation.
In this presentation, we present an innovative technology, Clip and Create, a tool for the automatic creation of sign language-supported instructional materials. The technology has two tools – Custom Publishing and Instructional Activities Templates – and the following capabilities:
(1) Automatically constructs customizable publishing formats;
(2) Allows users to import sign language clip art and other graphics;
(3) Allows users to draw free-hand or use re-sizable shapes;
(4) Allows users to incorporate text, numbers, and scientific symbols in various sizes, fonts, and colors;
(5) Saves and prints published products;
(6) Focuses on core vocabulary, idioms, and STEM content;
(7) Incorporates interpretation of STEM symbols into ASL/MSL;
(8) Generates customizable and printable Instructional Activities that reinforce vocabulary and concepts found in instructional content using Templates:
a. Sign language BINGO cards,
b. Crossword puzzles,
c. Finger spelling/spelling scrambles,
d. Word searches (in finger spelling and text),
e. Flashcards (with sign, text, and concept graphic options), and
f. Matching games (i.e., Standard Arabic-to-MSL and English-to-ASL).
(cf. Figure 1: Screenshots from Clip and Create)
The ability of this tool to efficiently create bilingual (i.e., MSL and written Arabic, and ASL and English) educational materials will have a profound positive impact on the quantity and quality of sign-supported curricular materials that teachers and parents are able to create for young deaf students. As a consequence, deaf children will show improved vocabulary recognition, reading fluency, and comprehension.
A unique aspect of this software is that written Arabic is used by many Arab countries even though the spoken language varies. Though there are variations in signs as well, there is enough consistency to make this product useful in other Arabic-speaking nations as is. Any signing differences can easily be adjusted by swapping sign graphic images.
-
-
-
A Distributed and Adaptive Graph Simulation System
Authors: Pooja Nilangekar and Mohammad Hammoud
Large-scale graph processing is becoming central to our modern life. For instance, graph pattern matching (GPM) can be utilized to search and analyze social graphs, biological data and road networks, to mention a few. Conceptually, a GPM algorithm is typically defined in terms of subgraph isomorphism, whereby it seeks to find subgraphs in an input data graph, G, which are similar to a given query graph, Q. Although subgraph isomorphism forms a uniquely important class of graph queries, it is NP-complete and very restrictive in capturing sensible matches for emerging applications like software plagiarism detection, protein interaction networks, and intelligence analysis, among others. Consequently, GPM has recently been relaxed and defined in terms of graph simulation. As opposed to subgraph isomorphism, graph simulation can run in quadratic time, return more intuitive matches, and scale well with modern big graphs (i.e., graphs with billions of vertices and edges). Nonetheless, the current state-of-the-art distributed graph simulation systems still rely on graph partitioning (which is also NP-complete), induce significant communication overhead between worker machines to resolve local matches, and fail to adapt to the varying complexities of query graphs.
In this work, we observe that big graphs are not big data. That is, the largest big graph that we know of can still fit on a single physical or virtual disk (e.g., 6TB physical disks are cheaply available nowadays, and AWS EC2 instances can offer up to 24 × 2048 GB virtual disks). However, since graph simulation requires exploring the entire input big graph, G, and naturally lacks data locality, existing memory capacities can be significantly dwarfed by G's size. As such, we propose GraphSim, a novel distributed and adaptive system for efficient and scalable graph simulation. GraphSim precludes graph partitioning altogether, yet still exploits parallel processing across cluster machines. In particular, GraphSim stores G at each machine but only matches an interval of G's vertices at that machine. All machines run in parallel, and each machine simulates its interval locally. If necessary, a machine can inspect the remaining dependent vertices in G to fully resolve its local matches without communicating with any other machine. Hence, GraphSim does not shuffle intermediate data whatsoever. In addition, it attempts not to overwhelm the memory of any machine by employing a mathematical model that predicts the best number of machines for any given query graph, Q, based on Q's complexity, G's size and the memory capacity of each machine. GraphSim is thereby adaptive as well. We experimentally verified the efficiency and scalability of GraphSim over private and public clouds using real-life and synthetic big graphs. Results show that GraphSim can outperform the current fastest distributed graph simulation system by several orders of magnitude.
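The interval idea can be illustrated with a toy sketch (ours, not GraphSim's implementation): each worker holds the whole graph but matches only its own slice of vertices, reading any dependent vertices locally, so nothing is shuffled between workers. A trivial label test stands in for full graph simulation:

```python
from multiprocessing import Pool

# Tiny ring graph: every vertex has a label and one outgoing edge.
G = {i: {"label": "ab"[i % 2], "out": [(i + 1) % 8]} for i in range(8)}

def match_interval(interval):
    # Each worker scans only its own interval of G's vertices, but may
    # locally read any dependent vertex of G to resolve its matches.
    return [v for v in interval
            if G[v]["label"] == "a" and G[G[v]["out"][0]]["label"] == "b"]

if __name__ == "__main__":
    intervals = [range(0, 4), range(4, 8)]   # one interval per worker
    with Pool(2) as pool:
        print(sum(pool.map(match_interval, intervals), []))
```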
-
-
-
A BCI m-Learning System
Authors: AbdelGhani Karkar and Amr Mohamed
Mobile learning can help in developing students' learning strength and comprehension skills. A connection is required to enable such devices to communicate with each other. A Brain-Computer Interface (BCI) can read brain signals and transform them into readable information; for instance, an instructor can use such a device to track the interest, stress level, and engagement of his students. We propose in this paper a mobile learning system that can transpose text to pictures (TTP) to illustrate the content of Arabic stories and synchronize information with connected devices in a private wireless mesh network. The device of the instructor can connect to the Internet to download further illustrative information, and it shares its WiFi and Bluetooth connection with at least one student. Students can then share the connection among each other to synchronize information on their devices. BCI devices are used to navigate, answer questions, and track students' performance. The aim of our educational system is to establish a private wireless mesh network that can operate in a dynamic environment.
Keywords
Mobile Learning, Arabic Text Processing, Brain Computer Interface, Engineering Education, Wireless Mesh Network.
I. Introduction
Nowadays, mobile devices and collaborative work have opened a new horizon for collaborative learning. As most people own private handheld smart phones, these have become the main means of connectivity and communication between people. Using smart phones for learning is beneficial and attractive, as learners can access educational resources at any time. The different eLearning systems available provide different options for a collaborative classroom environment. However, they do not address the needs of adapting learning performance: they do not provide dynamic communication, automatic feedback, or other required classroom events.
II. Background
Several collaborative learning applications have been proposed. BSCW [1] enables online sharing of workspaces between distant people. Lotus Sametime Connect [2] provides services for collaborative multiway chat, web conferencing, location awareness, and so on. Saad et al. [3] proposed an intelligent collaborative system that enables a small range of mobile devices to communicate using WLAN, falling back to Bluetooth in case of a power outage; the architecture of that system is centralized, with clients connecting to a server. Saleem [4] proposed a Bluetooth Assessment System (BAS) that uses Bluetooth as an alternative means to transfer questions and answers between the instructor and students. Of the many systems proposed in the domain of collaborative learning, several support mobile technology while others do not, but BCI has not yet been considered as part of a mobile educational system that can enrich the environment by reading mental signals.
III. The proposed system
Our proposed system provides educational content and synchronizes it over a Wi-Fi and Bluetooth wireless mesh network. It can be used in classrooms independently of the public network. The primary device broadcasts messages that enable users to follow the instructor's explanation on their own mobile devices. The proposed system covers: 1) the establishment of a wireless mesh network between mobile devices, 2) reading BCI data, 3) message communication, and 4) performance analysis of Wi-Fi and Bluetooth device-to-device communication.
References
[1] K. Klöckner, “BSCW: Cooperation support for distributed workgroups,” in Proc. 10th Euromicro Workshop on Parallel, Distributed and Network-based Processing, pp. 277–282, 2002.
[2] Lotus Sametime Connect. (2011, Feb. 17). Available: http://www.lotus.com/sametime.
[3] T. Saad, A. Waqas, K. Mudassir, A. Naeem, M. Aslam, S. Ayesha, A. Martinez-Enriquez, and P.R. Hedz, “Collaborative work in classrooms with handheld devices using Bluetooth and WLAN,” in Proc. IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE), 2014, pp. 1–6.
[4] N.H. Saleem, “Applying Bluetooth as Novel Approach for Assessing Student Learning,” Asian Journal of Natural & Applied Sciences, vol. 4, no. 2, 2015.
-
-
-
Qurb: Qatar Urban Analytics
Doha is one of the fastest growing cities in the world, with a population that has increased by nearly 40% in the last five years. Two significant trends are relevant to our proposal. First, the government of Qatar is actively embracing the use of fine-grained data to “sense” the city, both to maintain current services and to plan for the future, so as to ensure a high standard of living for its residents. In this line, QCRI has initiated several research projects related to urban computing to better understand and predict traffic mobility patterns in the city of Doha [1]. The second trend is the high degree of social media participation of the populace, which provides a significant amount of time-oriented social sensing of all types of events unfolding in the city. A key element of our vision is to integrate data from physical and social sensing into what we call socio-physical sensing. Another key element is to develop novel analytics approaches to mine this cross-modal data, making various applications for residents smarter than they could be with a single mode of data. The overall goal is to help citizens in their everyday life in urban spaces, and also to help transportation experts and policy specialists take a real-time, data-driven approach to urban planning and traffic management in the city.
Fast-growing cities like Doha encounter several problems and challenges that must be addressed in time to ensure a reasonable quality of life for their populations. These challenges encompass good transportation networks, sustainable energy sources, acceptable commute times, etc., and go beyond physical data acquisition and analytics.
In the era of the Internet of Things [5], it has become commonplace to deploy static and mobile physical sensors around a city to capture indicators of people's behaviour related to driving, pollution, energy consumption, etc. The data collected from physical as well as social sensors has to be processed using advanced exploratory data analysis, then cleaned and consolidated to remove inconsistent, outlying and duplicate records, before statistical analysis, data mining and predictive modeling can be applied.
Recent advances in social computing have enabled scientists to study and model different social phenomena using user-generated content shared on social media platforms. Such studies include modeling the spread of diseases from social media [3] and studying food consumption on Twitter [4]. We envision a three-layered setting: the ground, a physical sensing layer, and a social sensing layer. The ground represents the actual world (e.g., a city) with its inherent complexity and set of challenges. We aim to solve some of these problems by combining the two data overlays to better model the interactions between the city and its population.
QCRI's vision is twofold:
From a data science perspective: Our goal is to take a holistic cross-modality view of urban data acquired from disparate urban/social sensors in order to (i) design an integrated data pipeline to store, process and consume heterogeneous urban data, and (ii) develop machine learning tools for cross-modality data mining which aids decision making for the smooth functioning of urban services;
From a social informatics perspective: Use social data generated by users and shared via social media platforms to enhance smart city applications. This could be achieved by adding a semantic overlay to data acquired through physical sensors. We believe that combining data from physical sensors with user-generated content can lead to the design of better and smarter lifestyle applications, such as an “evening out experience” recommender that optimizes the whole experience, including driving, parking and restaurant quality, or a cab finder that takes into account the current traffic status.
Figure 1. Overview of Proposed Approach.
In Fig. 1 we provide a general overview of our cross-modality vision. While most of the effort toward building applications assisting people in their everyday life has focused on only one data overlay, we claim that combining the two overlays of data could generate a significant added value to applications on both sides.
References
[1] Chawla, S., Sarkar, S., Borge-Holthoefer, J., Ahamed, S., Hammady, H., Filali, F., Znaidi, W., “On Inferring the Time-Varying Traffic Connectivity Structures of an Urban Environment”, Proc. of the 4th International Workshop on Urban Computing (UrbComp 2015) in conjunction with KDD 2015, Sydney, Australia.
[2] Sagl, G., Resch, B., Blaschke, T., “Contextual Sensing: Integrating Contextual Information with Human and Technical Geo-Sensor Information for Smart Cities”. Sensors 2015, 15, 17013–17035.
[3] Sadilek, A., Kautz, H. A., Silenzio, V. “Modeling Spread of Disease from Social Interactions.” ICWSM. 2012.
[4] Sofiane Abbar, Yelena Mejova, and Ingmar Weber. 2015. You Tweet What You Eat: Studying Food Consumption Through Twitter. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ‘15). ACM, New York, NY, USA, 3197–3206.
[5] Atzori, L., Iera, A., Morabito, G. “The internet of things: A survey.” Computer networks 54.15 (2010): 2787–2805.
-
-
-
Detecting Chronic Kidney Disease Using Machine Learning
Authors: Manoj Reddy and John Cho
Motivation. Chronic kidney disease (CKD) refers to the gradual loss of the kidneys' functions over time, the primary one being to filter blood. Based on its severity, it can be classified into various stages, with the later ones requiring regular dialysis or a kidney transplant. CKD mostly affects patients suffering from the complications of diabetes or high blood pressure and hinders their ability to carry out day-to-day activities. In Qatar, the rapidly changing lifestyle has led to an increase in the number of patients suffering from CKD. According to Hamad Medical Corporation [2], about 13% of Qatar's population suffers from CKD, whereas the global prevalence is estimated to be around 8–16% [3]. CKD can be detected at an early stage by simple tests that measure blood pressure, serum creatinine and urine albumin, which can help protect at-risk patients from complete kidney failure [1]. Our goal is to use machine learning techniques to build a classification model that can predict whether an individual has CKD based on various parameters that measure health-related metrics, such as age, blood pressure, specific gravity, etc. By doing so, we shall be able to understand the different signals that identify whether a patient is at risk of CKD and help them by referring them to preventive measures.
Dataset. Our dataset was obtained from the UCI Machine Learning repository [4]; it contains about 400 individuals, of whom 250 had CKD and 150 did not. The data was collected at a hospital in southern India over a period of two months. In total there are 24 fields, of which 11 are numeric and 13 are nominal, i.e., they can take on only one of several categorical values. Some of the numerical fields include blood pressure, random blood glucose level, serum creatinine level, and sodium and potassium in mEq/L. Examples of nominal fields are answers to yes/no questions, such as whether the patient suffers from hypertension, diabetes mellitus or coronary artery disease. There were missing values in a few rows, which were addressed by imputing them with the mean value of the respective feature column. This ensures that the information in the entire dataset is leveraged to generate a model that best explains the data.
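For instance, a minimal pandas sketch of this imputation step (file name and column handling are assumptions; the UCI distribution ships as an ARFF file that needs conversion first):

```python
import pandas as pd

df = pd.read_csv("chronic_kidney_disease.csv")
numeric = df.select_dtypes(include="number").columns
df[numeric] = df[numeric].fillna(df[numeric].mean())   # mean of each column
# One common choice for the nominal fields is the column mode:
nominal = df.columns.difference(numeric)
df[nominal] = df[nominal].fillna(df[nominal].mode().iloc[0])
```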
Approach. We use two different machine learning tasks to approach this problem: classification and clustering. In classification, we built a model that can accurately predict whether a patient has CKD based on their health parameters; to understand whether people can be grouped together based on the presence of CKD, we also performed clustering on the dataset. Both approaches provide good insights into the patterns present in the underlying data.
Classification. This problem can be modeled as a classification task with two classes, CKD and not-CKD, representing whether or not a person suffers from chronic kidney disease. Each person is represented by the set of features described earlier, and we have ground truth on whether each patient has CKD, which can be used to train a model that learns to distinguish between the two classes. Our training set consists of 75% of the data, and the remaining 25% is used for testing. The ratio of CKD to non-CKD persons in the test set was kept approximately the same as in the entire dataset to avoid problems of skewness. Various classification algorithms were employed, namely logistic regression, Support Vector Machines (SVM) with various kernels, decision trees and AdaBoost, so as to compare their performance. While training the models, stratified K-fold cross-validation was adopted, which ensures that each fold has the same proportion of labeled classes. Each classifier has a different methodology for learning: some assign weights to each input feature along with a threshold that determines the output, and update them based on the training data; in the case of SVM, kernels map the input features into a different space in which the classes might be linearly separable; decision-tree classifiers have the advantage of being easily visualized, since a tree is analogous to a set of rules applied to an input feature vector. Each classifier has a different generalization capability, and its efficiency depends on the underlying training and test data. Our aim is to discover the performance of each classifier on this type of medical information.
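A sketch of this experimental protocol with scikit-learn, continuing from the imputation snippet above (the label column name "class" and its values are assumptions):

```python
from sklearn.model_selection import (StratifiedKFold, cross_val_score,
                                     train_test_split)
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

X = pd.get_dummies(df.drop(columns="class"))   # one-hot encode nominal fields
y = (df["class"] == "ckd").astype(int)

# 75/25 split, preserving the CKD / not-CKD ratio in the test set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM (linear)": SVC(kernel="linear"),
    "SVM (RBF)": SVC(kernel="rbf"),
    "decision tree": DecisionTreeClassifier(),
    "random forest": RandomForestClassifier(),
    "AdaBoost": AdaBoostClassifier(),
}
cv = StratifiedKFold(n_splits=5)     # each fold keeps the class proportions
for name, clf in models.items():
    print(name, cross_val_score(clf, X_train, y_train, cv=cv).mean())
```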
Clustering. Clustering involves organizing a set of items into groups based on a pre-defined similarity measure. This is an unsupervised learning method that does not use the label information. Among the various popular clustering algorithms, we use k-means and hierarchical clustering to analyze our data. K-means requires specifying the number of clusters and the initial cluster means, which are set to random points in the data. We vary the number of groups from 2 to 5 to find which maximizes the quality of the clustering; clustering with more than 2 groups might also allow us to quantify the severity of CKD for each patient, instead of the binary notion of having CKD or not. In each iteration of k-means, each person is assigned to the nearest group mean based on the distance metric, and the mean of each group is then recalculated based on the updated assignment; k-means stops once the means converge after a few iterations. Hierarchical clustering follows another approach, whereby each data point initially forms a cluster by itself, and at every step the two closest clusters are merged into a bigger cluster. The distance metric used in both methods is the Euclidean distance. Hierarchical clustering does not require any assumption about the number of clusters, since the resulting output is a tree-like structure recording the clusters that were merged at every step; the clusters for a given number of groups can be obtained by slicing the tree at the desired level. We evaluate the quality of the clustering with a well-known criterion called purity, which measures the number of data points that are grouped correctly according to the available ground truth [5].
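Purity can be computed directly from the cluster assignments and the ground-truth labels; a small sketch:

```python
import numpy as np

def purity(clusters, truth):
    """Purity: each cluster votes for its majority true class; return the
    fraction of points covered by those majority votes (1.0 is perfect)."""
    clusters = np.asarray(clusters)
    _, truth = np.unique(truth, return_inverse=True)   # map labels to 0..k-1
    hits = sum(np.bincount(truth[clusters == c]).max()
               for c in np.unique(clusters))
    return hits / truth.size
```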
Principal Component Analysis. Principal Component Analysis (PCA) is a popular tool for dimensionality reduction. It reduces the number of dimensions by projecting the data onto the eigenvectors of its covariance matrix that capture the most variance. We apply PCA before k-means and hierarchical clustering to reduce their complexity, as well as to make it easier to visualize the cluster differences in a 2D plot.
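A sketch of this reduce-then-cluster pipeline with scikit-learn, reusing the feature matrix X and the purity function above:

```python
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Standardize, then project onto the two top principal components.
X2 = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for k in range(2, 6):                     # vary the number of groups, 2..5
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(X2)
    print(k, purity(labels, y))           # purity as defined above
```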
Results. Classification. In total, six different classification algorithms were compared: logistic regression, decision tree, SVM with a linear kernel, SVM with an RBF kernel, Random Forest and AdaBoost. The last two fall under the category of ensemble methods, whose benefit is that they aggregate multiple learning algorithms to produce one that performs more robustly. The two types of ensemble learning methods used are averaging methods and boosting methods [6].
An averaging method typically outputs the average of several learning algorithms; the random forest classifier we used is of this type. A boosting method, on the other hand, “combines several weak models to produce a powerful ensemble” [6]; AdaBoost is the boosting method we used.
We found that the SVM with a linear kernel performed best, with 98% accuracy in predicting the labels of the test data. The next best performances came from the two ensemble methods, Random Forest with 96% and AdaBoost with 95% accuracy, followed by logistic regression with 91% and the decision tree with 90%. The classifier with the least accuracy was the SVM with an RBF kernel, at about 60%. We believe the RBF kernel performed worse because the input features are already high-dimensional and do not need to be mapped into a higher-dimensional space by RBF or other non-linear kernels. A Receiver Operating Characteristic (ROC) curve can also be plotted to compare the true positive and false positive rates, and we plan to compute other evaluation metrics such as precision, recall and F-score. The results are promising, as the majority of the classifiers achieve a classification accuracy above 90%.
After classifying the test dataset, feature analysis was performed to compare the importance of each feature. The most important features across classifiers were albumin level and serum creatinine. The logistic regression classifier also included the pedal edema feature along with the two features mentioned above, while the red blood cell feature was ranked important by the decision tree and AdaBoost classifiers.
Clustering. After performing clustering on the entire dataset using k-means, we were able to plot the result on a 2D graph, since PCA had reduced the data to two dimensions. The purity score of our clustering is 0.62; a higher purity score (the maximum is 1.0) represents better clustering quality. The hierarchical clustering plot provides the flexibility to view more than 2 clusters, since there might be gradients in the severity of CKD among patients rather than the simple binary representation of having CKD or not; multiple clusterings can be obtained by cutting the hierarchical tree at the desired level.
Conclusions. We currently live in the big-data era, with an enormous amount of data being generated from sources across all domains, including DNA sequencing, ubiquitous sensors, MRI/CAT scans, astronomical images, etc. The challenge now is to extract useful information and create knowledge using innovative techniques that efficiently process this data. Due to this data deluge, machine learning and data mining have gained strong interest in the research community, and statistical analysis of healthcare data has been gaining momentum, since it has the potential to provide non-obvious insights and foster breakthroughs in the area.
This work aims to bridge the fields of computer science and health by applying techniques from statistical machine learning to healthcare data. Chronic kidney disease (CKD) affects a sizable percentage of the world's population. If detected early, its adverse effects can be avoided, saving precious lives and reducing cost. We have been able to build a model based on labeled data that accurately predicts whether a patient suffers from chronic kidney disease based on their personal characteristics.
Our future work would be to include a larger dataset consisting of thousands of patients and a richer set of features, which would improve the model by capturing higher variation. We also aim to use topic models such as Latent Dirichlet Allocation to group the various medical features into topics, so as to understand the interactions between them. There needs to be greater encouragement for such interdisciplinary work in order to tackle grand challenges and, in this case, realize the vision of evidence-based healthcare and personalized medicine.
References
[1] https://www.kidney.org/kidneydisease/aboutckd
[3] http://www.ncbi.nlm.nih.gov/pubmed/23727169
[4] https://archive.ics.uci.edu/ml/datasets/Chronic_Kidney_Disease
[5] http://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html
-
-
-
Legal Issues in E-Commerce: A Case Study of Malaysia
Electronic commerce is the process of buying, selling, transferring or exchanging products, services and/or information via computer networks, including the Internet. In an e-commerce environment, just as in a traditional paper-based commercial transaction, sellers present their products, prices and terms to potential buyers; buyers then consider their options, negotiate prices and terms if necessary, place orders and make payment. E-commerce is growing at a significant rate all over the world due to the efficiency of its business transactions. Despite this development, there is uncertainty as to whether the traditional principles of contract law are applicable to electronic contracts. In the formation of an e-contract, the parties might disagree about at what point, and in which country, the e-contract was formed. Malaysia, like other countries, has enacted legislation on e-commerce in compliance with international bodies, i.e., the United Nations Commission on International Trade Law (UNCITRAL). The aim and objective of this paper is to assess the adequacy of the existing legislation on e-commerce in Malaysia. The paper also examines the creation of legally enforceable agreements in e-commerce in Malaysia, digital signatures, and the uncertainty of where and when an e-contract is formed.
-
-
-
Latest Trends in Twitter from Arab Countries and the World
Authors: Wafa Waheeda Syed and Abdelkader Lattab
1. Introduction
Twitter is the microblogging social media platform with perhaps the widest variety of content. Open access to Twitter data through the Twitter APIs has made it an important area of research. Twitter has a useful feature called “Trends”, which displays hot topics, differing for every location; this trending information is derived from the tweets being shared on Twitter in a particular location. However, Twitter limits trending information to current tweets, as its algorithm concentrates on generating trends in real time rather than summarizing hot topics on a daily basis. A clear summarization of contemporary trending information is therefore missing and much needed. Latest Twitter Trends, the application discussed in this paper, is built to aggregate hot topics on Twitter for the Arab countries and the world. It is a real-time application that summarizes hot topics over time, enabling users to study the summarized Twitter trends by location with the help of a word cloud. The tool also enables the user to click on a particular trend and navigate through Twitter Search, also in real time. In addition, the tool overcomes a drawback of Twitter's trending information: trends in different languages in different locations are often mixed. For example, if #Eid-ul-Adha is trending in Arab countries, عيد الأضحى# is also trending. Our application consolidates trends in Arabic and English that have the same meaning and displays only one trending topic, instead of the same topic twice in different languages. The application also gives an estimate of the different kinds of Twitter users, analyzing the percentage of tweets made by male and female users in each location.
2. Trends data gathering
Twitter APIs give developers access to real-time data comprising tweets and trends. The tool uses the Twitter REST API to connect to Twitter, authenticate, and retrieve trending data in JSON format. Python is used to write the data-gathering scripts. A data crawling script connects to the Twitter API by authenticating with the credentials generated when registering an application at apps.twitter.com; the Consumer Key, Consumer Secret, Access Token and Access Token Secret are the credentials used for authentication. The data returned by Twitter is in JSON (JavaScript Object Notation) format, and the Python crawling script parses the JSON and builds a CSV database. This high-level gathering of data comprises the following steps. The crawling script first retrieves the list of trending places, i.e., all locations with trending topics together with their WOEIDs (Where On Earth IDs), and stores it as a CSV file in the tool's database. Each WOEID is then used as a key to get the Twitter trending topics location by location, in real time, through the REST API. The trends for every location are likewise returned in JSON format, converted to CSV and saved in the database; this trends CSV file is appended every time new trending data is collected. Another CSV file in the database holds only the current information for all trending places, for later use. Natural language processing is applied to the trends-by-location CSV data, with a dictionary, to consolidate Arabic and English trending topics that refer to the same thing. The results are stored in a CSV file and used for hot topic identification.
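For concreteness, a minimal sketch of this gathering step (the endpoints are the public Twitter v1.1 REST endpoints; the credential placeholders and file layout are assumptions, not the authors' exact script):

```python
import csv
import requests
from requests_oauthlib import OAuth1

# Placeholders for the four credentials issued when registering the app.
auth = OAuth1(CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN, ACCESS_TOKEN_SECRET)

# 1) All locations that currently have trending topics, with their WOEIDs.
places = requests.get("https://api.twitter.com/1.1/trends/available.json",
                      auth=auth).json()

# 2) Trends for one WOEID, appended to the CSV "database".
woeid = places[0]["woeid"]
data = requests.get("https://api.twitter.com/1.1/trends/place.json",
                    params={"id": woeid}, auth=auth).json()
with open("Trends.csv", "a", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for trend in data[0]["trends"]:
        writer.writerow([woeid, data[0]["as_of"], trend["name"]])
```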
3. Hot topic identification
After the high-level data gathering, the CSV files are used as a database for generating a word cloud with D3.js. The trending data is processed by counting the number of occurrences of each topic, giving an estimate of which topics trended for a long time; this frequency is taken as the count value for the trending topic, and the word cloud is generated from it. The frequency calculation is implemented as a Python word cloud crawling script, which takes the trends-by-location data as input and generates a database of trends by city as JSON files, with each trending topic as key and the frequency of its occurrence as value.
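A minimal sketch of the frequency count feeding the word cloud (the {text, size} key names follow the common d3-cloud convention and are an assumption):

```python
import csv
import json
from collections import Counter

# Count how often each topic appears across the appended trend snapshots.
with open("Trends.csv", newline="", encoding="utf-8") as f:
    counts = Counter(row[2] for row in csv.reader(f))   # column 2 = trend name

cloud = [{"text": topic, "size": n} for topic, n in counts.most_common(100)]
with open("cities.json", "w", encoding="utf-8") as f:
    json.dump(cloud, f, ensure_ascii=False)
```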
4. Architecture
Figure 1: Latest Trends in Twitter Application Architecture.
The Python scripts for data crawling and word cloud crawling are used to connect to Twitter, gather data, process it and store it in the database. D3.js and the Google Fusion Tables API are used for displaying the application results. The Google Fusion Tables API is used to create a map containing the current trends by location, geo-tagged on the map; a dedicated Java program connects and authenticates with the Google API, deletes the old fusion table data and imports the new, updated rows into the table, which is then used to visualize the trending information from Twitter. The Python script Tagcloud.py generates cities.json with the trending topics from the Trends.csv file; these files form the database from which the word cloud is generated with D3.js, individually for every city/location.
5. Results
The data crawling script establishes a connection with Twitter and returns JSON data as shown in Fig. 2. This data is processed and saved as a CSV into our application database for later use.
Figure 2: Trends data output from Twitter in JSON format.
The word cloud crawling script generates key-value pairs of processed trending data from the database, the key being the trending topic and the value the frequency of the trending topic's occurrence. Fig. 3 displays the JSON dataset used for generating the word cloud.
Figure 3: JSON data of the processed trending data.
The word cloud is generated using the D3.js library and displays the summarized trending data to the user. Figure 4 shows the word cloud for trending data in London.
Figure 4: Word cloud for trending data.
-
-
-
Mitigation of Traffic Congestion Using Ramp Metering on Doha Expressway
Authors: Muhammad Asif Khan, Ridha Hamila and Khaled Salah Shaaban
Ramp metering is the most effective and widely implemented strategy for improving traffic flow on freeways by restricting the number of vehicles entering the freeway using ramp meters. A ramp meter is a traffic signal programmed with a much shorter cycle time, so as to allow a single vehicle or a very small platoon of vehicles (usually two or three) per green phase. Ramp metering algorithms define the underlying logic that calculates the metering rate. Ramp meters are usually employed to control vehicles entering the freeway (mainline) from on-ramps, to mitigate the impact of ramp traffic on the mainline flow; however, they can also be used to control traffic flow from freeway to freeway. The selection of an appropriate ramp metering strategy is based on the needs and goals of the regional transportation agency. Ramp meters can be controlled either locally (isolated) or system-wide (coordinated). Locally controlled, or isolated, ramp meters control vehicle access based on the local traffic conditions on a single ramp or freeway segment, to reduce congestion near that ramp. System-wide, or coordinated, ramp meters are used to improve traffic conditions on a freeway segment or the entire freeway corridor. Ramp meters can be programmed as either fixed-time or traffic-responsive. Fixed-time metering uses pre-set metering rates on a defined schedule derived from historical traffic data; it addresses recurring congestion, but fails in cases of non-recurring congestion. Traffic-responsive metering uses current traffic conditions, collected in real time by loop detectors or other surveillance systems, to adjust the metering rate, and can be implemented with both isolated and coordinated ramp meters. Well-known traffic-responsive algorithms include Asservissement Linéaire d'Entrée Autoroutière (ALINEA), Heuristic Ramp Metering Coordination (HERO), System Wide Adaptive Ramp Metering (SWARM), fuzzy logic, the stratified zone algorithm, the bottleneck algorithm, the zone algorithm, the HELPER algorithm and Advanced Real Time Metering (ARM). These algorithms were developed in various regions of the world, and some of them have been evaluated over long periods of time. However, differences in traffic parameters, driver behavior, road geometry and other factors can affect an algorithm's performance when it is implemented in a new location; hence it is necessary to investigate the performance of a ramp metering strategy prior to physical deployment.
In this work, we chose the Doha Expressway for deploying ramp metering to improve traffic conditions. The Doha Expressway is a six-lane highway in Qatar that links the north of Doha to the south, and it can be accessed through several on-ramps at different locations. The merging of ramp traffic onto the freeway often causes congestion on the highway in several ways: it increases traffic density, reduces vehicle speeds and causes vehicles to change lanes in the merging area. Hence, in this research we first investigated the impact of ramp traffic on the mainline flow and identified the potential bottlenecks; ramp meters were then installed at some of the on-ramps to evaluate how well each improves traffic flow on the mainline. The outcome of this study is the selection of the optimum metering strategy for each on-ramp, with proposed modifications where required.
Extensive simulations were carried out in the PTV VISSIM traffic microsimulation software; the simulator was calibrated using real traffic data and geometric information.
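As one concrete example of the traffic-responsive logic named above, the ALINEA feedback law adjusts the metering rate from the measured downstream occupancy; a minimal sketch, with typical parameter values from the literature rather than values calibrated for Doha:

```python
def alinea(r_prev, occ_meas, occ_target=18.0, K_R=70.0,
           r_min=240.0, r_max=1800.0):
    """One ALINEA update: r(k) = r(k-1) + K_R * (o_target - o_measured).

    Rates are in veh/h and occupancies in percent; K_R ~ 70 veh/h per
    percent occupancy is the value commonly quoted in the literature."""
    r = r_prev + K_R * (occ_target - occ_meas)
    return min(max(r, r_min), r_max)   # clamp to feasible signal timings

# Occupancy above target lowers the rate: alinea(900.0, 22.0) == 620.0
```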
-
-
-
Distributed Multi-Objective Resource Optimization for Mobile-Health Systems
Authors: Alaa Awad Abdellatif and Amr Mohamed
Mobile-health (m-health) systems leverage wireless and mobile communication technologies to promote new ways to acquire, process, transport and secure raw and processed medical data. They provide the scalability needed to cope with the increasing number of elderly and chronic-disease patients requiring constant monitoring. However, the design and operation of such pervasive health monitoring systems with Body Area Sensor Networks (BASNs) is challenging in two respects: first, the limited energy, computational and storage resources of the sensor nodes; second, the need to guarantee application-level Quality of Service (QoS). In this paper, we integrate wireless network components and application-layer characteristics to provide sustainable, energy-efficient and high-quality services for m-health systems. In particular, we propose an Energy-Cost-Distortion (E-C-D) solution, which exploits the benefits of medical data adaptation to optimize transmission energy consumption and the cost of using network services. At large network scales, however, and due to the heterogeneity of wireless m-health systems, a centralized approach becomes less efficient. Therefore, we present a distributed cross-layer solution that is suitable for heterogeneous wireless m-health systems and scalable with the network size. Our scheme leverages Lagrangian duality theory and enables us to find an efficient trade-off among energy consumption, network cost and vital-sign distortion for delay-sensitive transmission of medical data over heterogeneous wireless environments. In this context, we propose a solution that enables energy-efficient, high-quality patient health monitoring to facilitate remote chronic-disease management. We formulate a multi-objective optimization problem that targets different QoS metrics, namely signal distortion, delay and Bit Error Rate (BER), as well as monetary cost and transmission energy; we aim to achieve the optimal trade-off among these factors, which exhibit conflicting trends.
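A plausible scalarized form of such an E-C-D objective, written here for illustration (our notation and weights, not necessarily the authors' exact formulation):

```latex
\min_{\mathbf{x}} \;\; \alpha\, E(\mathbf{x}) + \beta\, C(\mathbf{x}) + \gamma\, D(\mathbf{x})
\quad \text{s.t.} \quad T(\mathbf{x}) \le T_{\max}, \qquad \mathrm{BER}(\mathbf{x}) \le \varepsilon_{\max}
```

Here x collects the per-PDA encoding and transmission variables; E, C and D are the transmission energy, network cost and encoding distortion; T is the delay; and α, β, γ weight the conflicting objectives. Problems of this shape can be convexified via a geometric-program transformation and then decomposed via Lagrangian duality, as outlined in the contributions below.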
The main contributions of our work can be summarized as follows:
- (1) We design an EEG-based health monitoring system that achieves high performance by properly combining network functionalities and EEG application characteristics.
- (2) We formulate a cross-layer multi-objective optimization model that aims at adapting and minimizing, at each PDA, the encoding distortion and monetary cost at the application layer, as well as the transmission energy at the physical layer, while meeting the delay and BER constraints.
- (3) We use geometric program transformation to convert the aforementioned problem into a convex problem, for which an optimal, centralized solution can be obtained.
- (4) By leveraging Lagrangian duality theory, we then propose a scalable distributed solution. The dual decomposition approach enables us to decouple the problem into a set of sub-problems that can be solved locally, leading to a scalable distributed algorithm that converges to the optimal solution.
- (5) The proposed distributed algorithm for EEG based m-health systems is analyzed and compared to the centralized approach.
Our results show the efficiency of our distributed solution and its ability to converge to the optimal solution and adapt to varying network conditions. In particular, simulation results show that the proposed scheme achieves the optimal trade-off between energy efficiency and the QoS requirements of health monitoring systems. Moreover, it offers significant savings in the objective function (i.e., the E-C-D utility function) compared to solutions based on equal bandwidth allocation.
-
-
-
Multimodal Interface Design for Ultrasound Machines
Authors: Yasmin Halwani, Tim S.E. Salcudean and Sidney S. Fels
Sonographers, radiologists and surgeons use ultrasound machines on a daily basis to acquire images for interventional procedures, scanning and diagnosis. The current interaction with ultrasound machines relies completely on physical keys and touch-screen input. Besides not offering a sterile interface for interventional procedures and operations, using the ultrasound machine requires the clinician's physical presence near the machine to operate its keys, which restricts the clinician's free movement and natural posture when applying the probe to the patient and often forces uncomfortable ergonomics for prolonged periods of time. According to surveys conducted continuously over the past decade on the incidence of work-related musculoskeletal disorders (WRMSDs), up to 90% of sonographers across North America experience WRMSDs during routine ultrasonography. Repetitive motions and prolonged static postures are among the risk factors for WRMSDs, and both can be significantly reduced by an improved ultrasound-machine interface that does not rely completely on direct physical interaction. Furthermore, the majority of physicians who perform ultrasound-guided interventions hold the probe with one hand while inserting a needle with the other, which makes adjusting the ultrasound machine's parameters impossible without external assistance. Similarly, surgeons' hands are typically occupied with sterile surgical tools, leaving them unable to control the machine's parameters independently. The need for an assistant is suboptimal, as it is sometimes difficult for the operator or surgeon to communicate a specific intent during a procedure. Introducing a multimodal interface for ultrasound machine parameters that improves on the current interface and is capable of hands-free interaction can bring unprecedented benefit to all clinicians who use ultrasound machines: it will help reduce the strain-related injuries and cognitive load experienced by sonographers, radiologists and surgeons and provide a more natural, effective and efficient interface.
Due to the need for sterile, improved and efficient interaction, and the availability of low-cost hardware, multimodal interaction with medical imaging tools is an active research area. Numerous studies have explored speech, vision, touch and gesture recognition for interacting with both pre-operative and interventional image parameters during interventional procedures or surgical operations. However, multimodal interaction with ultrasound machines has not been sufficiently explored; existing work is mostly limited to augmenting one interaction modality at a time, such as the commercial software and patents on speech-recognition-enabled ultrasound machines. Given the wide range of settings and menu navigation required for ultrasound image acquisition, there is room to improve the interaction by expanding the existing physical interface with hands-free methods such as voice, gesture and eye-gaze recognition. Namely, the system's ability to recognize the user's context through the additional interaction modalities can simplify the menu navigation needed to complete a scanning task. In addition, the user will no longer be restricted by the physical interface and will be able to interact with the machine completely hands-free, as explained earlier for sterile environments in interventional procedures.
Field studies and interviews with sonographers and radiologists were conducted to explore potential areas of improvement in current ultrasound systems. Typical ultrasound machines used for routine ultrasonography tend to have an extensive physical interface, with the keys and switches for all possible ultrasonography contexts co-located in the same area as the keyboard. Although the keys are distributed according to their typical frequency of use in common exams, sonographers tend to glance at them repeatedly during a routine session, which interrupts their focus on the image. While it varies with the type of exam, an ultrasound exam typically takes an average of 30 minutes and requires capturing multiple images. For time-sensitive tasks, such as imaging anatomical structures in constant motion, coordinating the image, key selection, menu navigation and probe positioning can be both time-consuming and distracting. Interviewed sonographers also reported discomfort from repeated awkward postures and a preference for a hands-free interface in cases where they must hold the ultrasound probe far from the machine's physical control keys, as with immobile patients or patients with a high BMI.
Commercial software already exists that addresses the repeated physical keystrokes and the need for a hands-free interface. Some machines provide a context-aware solution in the form of customizable software that automates steps in ultrasound exams, which has reportedly decreased keystrokes by 60% and exam time by 54%. Other machines provide voice-enabled interaction to reduce the uncomfortable postures of sonographers trying to position the probe while reaching for the physical keys. The sonographers we interviewed frequently used the context-aware automation software during their exams, which shows the potential of the context-awareness that multimodal interaction systems can offer. On the other hand, sonographers did not want voice commands as a primary interaction modality alongside the existing physical controls: an ultrasound exam involves a lot of communication with the patient, and relying on voice input risks misinterpreting sonographer-patient conversation as commands directed at the machine. This leads to the conclusion that voice-enabled systems need to be augmented with other interaction modalities so they can be used efficiently when needed, without being confused by external voice interaction.
This study aims to explore interfaces for controlling ultrasound machine settings during routine ultrasonography and interventional procedures through multimodal input. The main goal is to design an efficient, time-saving and cost-effective system that minimizes repetitive physical interaction with the ultrasound machine and provides a hands-free mode, to reduce WRMSDs and allow direct interaction with the machine under sterile conditions. This will be achieved through additional field studies and prototyping, followed by user studies to assess the developed system.
-
-
-
Pokerface: The Word-Emotion Detector
Authors: Alaa Khader, Ashwini Kamath, Harsh Sharma, Irina Temnikova, Ferda Ofli and Francisco Guzmán
Every day, humans interact with text from different sources such as news, literature, education and even social media. While reading, humans process text word by word, accessing the meaning of a particular word from the lexicon and, when needed, adjusting its meaning to match the context of the text (Harley, 2014). The process of reading can induce a range of emotions, such as engagement, confusion, frustration, surprise or happiness. For example, when readers come across unfamiliar jargon, it may confuse them as they try to understand the text.
In the past, scientists have addressed emotion in text from the writer's perspective. For example, the field of Sentiment Analysis aims to detect the emotional charge of words in order to infer the intentions of the writer. Here, however, we propose the reverse approach: detecting the emotions produced in readers while they process text.
Detecting which emotions are induced by reading a piece of text can give us insights about the nature of the text itself. A word-emotion detector can be used to assign specific emotions experienced by readers to specific words or passages of text, an area of research that, to our knowledge, has not been explored before.
There are many potential applications of a word-emotion detector. It could be used to analyze how passages in books, news or social media are perceived by readers, which can guide stylistic choices catering to a particular audience. In a learning environment, it could detect the affective states and emotions of students so as to infer their level of understanding, which could be used to assist students with difficult passages. In a commercial environment, it could detect reactions to the wording of advertisements. In the remainder of this report, we detail the first steps we followed to build a word-emotion detector. We present the details of our system, developed during QCRI's 2015 Hot Summer Cool Research internship program, as well as our initial experiments; in particular, we describe our experimental setup, in which viewers watch a foreign-language video with modified subtitles containing deliberate emotion-inducing changes. We analyze the results and discuss future work.
The Pokerface System
A poker face is an inscrutable face that reveals no hint of a person's thoughts or feelings. The goal of the ‘Pokerface’ project is to build a word-emotion detector that works even when no facial movements are present. To do so, the Pokerface system uses a symbiosis of the latest consumer-level technologies: eye-tracking to detect the words being read; electroencephalography (EEG) to detect the reader's brain activity; and facial-expression recognition (FER) to detect movements in the reader's face. We then classify the detected brain activity and facial movements into emotions using neural networks.
Methodology
To detect the emotions readers experience as they read text, we combined several technologies: FER and EEG detect emotional reactions through changes in facial expressions and brainwaves, while eye-tracking identifies the stimulus (the text) that triggered the detected reaction. A video interface was created to run the experiments. Below we describe each component independently and how we used it in the project.
EEG
EEG is the recording of electrical activity along the scalp (Niedermeyer and Lopes da Silva, 2005). It measures voltage fluctuations resulting from ionic current flows within the neurons of the brain, and it is one of the few non-intrusive techniques available that provides a window on physiological brain activity. EEG averages the response of many neurons as they communicate, measuring their electrical activity with surface electrodes. We can then use a user's brain activity to detect their emotional state.
Data Gathering
In our experiments, we used the Emotiv EPOC neuroheadset (2013), which has 14 EEG channels plus two references, inertial sensors and two gyroscopes. The raw data from the neuroheadset was parsed together with the timestamp of each sample.
Data Cleaning and Artifact Removal
After retrieving the data from the EEG, we need to remove “artifacts”, i.e., changes in the signals that do not originate from neurons (Vidal, 1977), such as ocular movements, muscular movements and technical noise. To do so, we used the open-source toolbox EEGLAB (Delorme & Makeig, 2004) for artifact removal and filtering (band-passing the signal to the 4–45 Hz range, which also removes line noise).
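The authors worked in EEGLAB/ERPLAB (MATLAB); for readers working in Python, an equivalent minimal sketch with the MNE library (the file name and annotation-based events are assumptions) would be:

```python
import mne  # Python alternative to the EEGLAB/ERPLAB pipeline used here

# Assumed export format; Emotiv recordings are often converted to EDF.
raw = mne.io.read_raw_edf("session01.edf", preload=True)
raw.filter(4.0, 45.0)                    # band-pass; also suppresses line noise

events, _ = mne.events_from_annotations(raw)
# 0 .. 800 ms after each stimulus, matching the reaction window described
# in the Events subsection below.
epochs = mne.Epochs(raw, events, tmin=0.0, tmax=0.8,
                    baseline=None, preload=True)
erp = epochs.average()                   # average over events -> ERP
```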
ERP Collection
We decided to treat the remaining artifacts as random noise and move forward with extracting Event-Related Potentials (ERPs), since all the other options we found required some level of manual intervention. ERPs are the sections of our EEG data relevant to the stimuli and the subjects' reaction times. To account for random effects from the artifacts, we averaged the ERPs over different users and events. To do so, we used EEGLAB's plugin ERPLAB to add event codes to our continuous EEG data based on stimulus times.
Events
Our events were defined as textual modifications in the subtitles, designed to induce confusion, frustration or surprise. The time at which the subject looks at a word was marked as the stimulus time (st) for that word, and the reaction time was marked as st+800 ms, because reactions to a stimulus are rarely observed later than 800 ms after its appearance (Fischler and Bradley, 2006).
The ERPs were obtained as the average of different events corresponding to the same condition (control or experimental).
Eye-Tracking
An eye-tracker is an instrument that detects the movements of the eye. Exploiting the nature of the eye and human vision, the eye-tracker identifies where a user is looking by shining a light that is reflected by the eye and capturing the reflection with image sensors. It then measures the angle between the cornea and pupil reflections to calculate a gaze vector and identify the direction of the gaze.
In this project, we used the EyeTribe eye-tracker to identify the words a reader looked at while reading. It was set up on a Windows machine. Before an experiment, the user needs to calibrate the eye-tracker, and recalibration is necessary every time the user changes their sitting position. While the eye-tracker runs, a JavaScript/Node.js function extracts and parses the data from the device and writes it to a text file. This data includes the screen x and y coordinates of the gaze, the timestamp, and an indicator of whether the gaze point is a fixation; it is received at a rate of 60 samples per second. The gaze points are used to determine which words are looked at at any specific time.
Video Interface
In our experiments, each user was presented with a subtitled video. To create the experimental interface, we made several design choices based on previous empirical research: we used the Helvetica font, given its consistency across platforms (Falconer, 2011), at font size 26, which improves the readability of subtitles on large desktops (Franz, 2014). We used JavaScript to detect the location of each word displayed on the screen.
After gathering the data from the experiment, we used an offline process to detect the “collisions” between the eye-tracker gaze points and the words displayed to the user, using both time and coordinate information. The result was a series of words annotated with the specific time spans in which they were looked at.
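A minimal sketch of this offline collision step (the data layout is an assumption based on the description above):

```python
def annotate_gaze(words, gaze):
    """words: [(word, (x0, y0, x1, y1), (t_on, t_off)), ...] from the interface;
    gaze: [(x, y, t), ...] samples at ~60 Hz from the eye-tracker log."""
    hits = []
    for x, y, t in gaze:
        for word, (x0, y0, x1, y1), (t_on, t_off) in words:
            # a collision: the sample falls inside the word's box while shown
            if t_on <= t <= t_off and x0 <= x <= x1 and y0 <= y <= y1:
                hits.append((word, t))
    return hits
```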
FER
Facial Expression Recognition (FER) is the process of detecting an individual's emotion by analyzing their facial expressions in an image or video. FER has been used for various purposes, including psychological studies, tiredness detection, facial animation and robotics.
Data Gathering
We used the Microsoft Kinect with the Kinect SDK 2.0 to capture the individual's face. The Kinect provides color and infrared images as well as depth data, but for this project we only worked with the color data, saved as a sequence of color images recorded at 30 frames per second (fps). The code used multithreading to ensure a high frame rate and low memory usage. Each image frame was assigned a timestamp in milliseconds, which was saved in a text file.
Feature Extraction
After extracting the data from the Kinect, the images were processed to locate facial landmarks. We used Face++, a free API for face detection, recognition and analysis, to locate 83 facial landmarks in each image; the API returns the name of each landmark along with its x and y coordinates.
The next step involved computing Action Units (AUs) from the facial landmarks located through Face++. Action Units are the actions of individual muscles or groups of muscles, such as raising the outer eyebrow or stretching the lips (Cohn et al. 2001). To determine which AUs to use for FER, and how to calculate them, Tekalp and Ostermann (2000) was taken as a reference.
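As a toy illustration of turning landmarks into an AU-like measurement (the landmark keys are hypothetical; this does not reproduce the exact Tekalp and Ostermann formulas):

```python
import math

def eyebrow_raise(lm):
    """Toy AU-like feature: brow height above the eye, normalized by the
    inter-ocular distance so the measure is scale-invariant."""
    scale = math.dist(lm["left_eye_center"], lm["right_eye_center"])
    # screen y grows downward, so (eye_y - brow_y) > 0 when the brow is raised
    lift = lm["left_eye_center"][1] - lm["left_eyebrow_upper_middle"][1]
    return lift / scale
```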
Classification
The final step was classifying each image frame into one of eight emotions: happiness, sadness, fear, anger, disgust, surprise, neutral and confused. We used the MATLAB Neural Network toolbox (MathWorks, Inc., 2015) and designed a simple feed-forward neural network with backpropagation.
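The network itself was built in MATLAB; a comparable sketch in Python with scikit-learn (X_au holding one row of AU features per frame and y integer emotion labels, both assumptions):

```python
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["happiness", "sadness", "fear", "anger",
            "disgust", "surprise", "neutral", "confused"]

# One hidden layer; scikit-learn trains it with backpropagation (Adam by default).
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
clf.fit(X_au, y)                              # y holds indices into EMOTIONS
print(EMOTIONS[clf.predict(X_au[:1])[0]])     # emotion of the first frame
```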
Pilot results
EEG
In our pilot classification study, we experimented with the alcoholism dataset used by Kuncheva and Rodriguez (2012), from the UC Irvine (UCI) machine learning repository (2010), which contains raw ERP data for 10 alcoholic and 10 sober subjects. We extracted features using interval feature extraction, for a total of 96K features per subject, and achieved around 98% accuracy on the training data.
FER
We experimented with three different individuals as well as several images from the Cohn-Kanade facial expression database. The application achieved roughly 75–80% accuracy, which could be improved further by adding more data to the training set. We also observed that the classifier was more accurate on some emotions than others: images depicting happiness were classified correctly more often than those depicting any other emotion, while the classifier had difficulty distinguishing between fear, anger and sadness.
Conclusion
In this paper, we presented Pokerface, a word-emotion detector that can detect users' emotions as they read text. We built a video interface that displays subtitled videos, and used the EyeTribe eye-tracker to identify the word a user is looking at at a given time in the subtitles. We used the Emotiv EPOC headset to obtain the user's EEG brainwaves and the Microsoft Kinect to obtain their facial expressions, extracted features from both, and used neural networks to classify the facial expressions and EEG brainwaves into emotions.
Future directions of work include improving the accuracy of the FER and EEG emotion classification components; the EEG results in particular can be improved by exploring additional artifact detection and removal techniques. We also want to integrate the whole pipeline into a seamless application that allows effortless experimentation.
Once the setup is streamlined, Pokerface can be used to explore many applications that optimize users' experiences in education, news, advertising, etc. For example, the word-emotion detector could be utilized in computer-assisted learning to provide students with virtual affective support, such as detecting confusion and providing clarifications.
-
-
-
The Secure Degrees of Freedom of the MIMO BC and MIMO MAC with Multiple Unknown Eavesdroppers
Authors: Mohamed Khalil, Tamer Khattab, Tarek Elfouly and Amr Mohamed
We investigate the secure degrees of freedom (SDoF) of a two-transmitter Gaussian multiple access channel with multiple antennas at the transmitters, a legitimate receiver and an unknown number of eavesdroppers, each with a number of antennas less than or equal to a known value N_E. The channel matrices between the legitimate transmitters and the receiver are available everywhere, while the legitimate pair have no information about the fading eavesdroppers' channels. We provide the exact sum SDoF for the considered system and show that it is equal to min( M_1 + M_2 − N_E, (1/2)(max(M_1, N) + max(M_2, N) − N_E), N ), where M_1 and M_2 denote the transmitters' antennas and N the receiver's antennas. A new comprehensive upper bound is deduced, and a new achievable scheme based on jamming is exploited. We prove that cooperative jamming is SDoF-optimal even without eavesdropper CSI available at the transmitters.
-
-
-
On Practical Device-2-device Communication in The Internet of Things
Device-to-device (D2D) communication, the establishment of direct communication between nearby devices, is an exciting and innovative feature of next-generation networks. D2D has especially attracted the attention of the research community because it is a key ingredient of the Internet of Things (IoT): a network of physical objects embedded with sensors, to collect data about the world, and networking interfaces, to exchange this data. IoT can serve a wide variety of application domains ranging from health and education to green computing and property management. Fortunately, processors, communication modules and most electronic components are shrinking in size and price, which allows us to integrate them into ever more objects and systems.
Today, companies such as Intel and Cisco are making general-purpose IoT devices available to everyday developers, thus expediting the rate at which the world is adopting IoT. Many of these devices are becoming more affordable, and the Internet is swamped with tutorials on how to build simple systems with them. However, despite the fundamental importance of effective D2D communication in IoT, there is a shortage of work that goes beyond the standards being developed and accurately assesses the practical D2D communication performance of such devices.
We address this gap by studying different communication metrics on representative general-purpose IoT devices and examining their impact in practical settings. We share our experiences assessing the performance of different communication technologies and protocol stacks on these devices in different environments. Additionally, we use this knowledge to improve “UpnAway”, an agile UAV cyber-physical-system testbed developed at CMU Qatar, by enhancing communication between the IoT devices attached to the drones for on-board processing and autonomous navigation control.
Measuring D2D Performance in IoT devices
We investigated the performance of Intel Edison (Fig. 1, a) and Raspberry Pi (Fig. 1, b) devices because they are the two most representative general-purpose IoT devices. The Intel Edison devices are equipped with a dual-core CPU, a single-core microcontroller, integrated Wi-Fi, Bluetooth 4.0 support, 1 GB DDR RAM, 4 GB flash memory and 40 multiplexed GPIO interfaces. Being so highly equipped, they are being rapidly adopted by a substantial segment of IoT developers worldwide. The Raspberry Pi devices are currently the most widely used devices, so we used them for results comparison and validation. We made our measurements on D2D Wi-Fi and Bluetooth links because, on both the Edison and Raspberry Pi devices, they are the most user-friendly communication interfaces. The Intel Edison devices come with built-in Wi-Fi and Bluetooth interfaces that can be controlled through a friendly GUI, and the Raspberry Pi devices can support these technologies by simply plugging in the corresponding USB dongles.
Our investigations involved accurately measuring RTT, throughput, signal strength and reliability. The experiments involved:
- Sending time-stamped packets to an echo server, then using the echo to calculate the round-trip time (RTT; see the sketch after this list);
- Exchanging files multiple times, then calculating the average time delay over different distances (Fig. 2) (throughput);
- Increasing the distance between nodes and reading the signal strength (signal strength);
- Transferring large files repeatedly to test the reliability of different protocols (reliability).
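As an illustration of the first experiment, the following is a minimal, self-contained sketch of RTT measurement with time-stamped UDP packets and an echo server; the loopback host and port are placeholders, not the project's actual measurement harness:

```python
# Minimal RTT measurement sketch: time-stamped UDP packets are sent to an
# echo server, and the echo is used to compute the round-trip time.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9999  # placeholders; replace with the peer device

def echo_server():
    """Echo every received datagram back to its sender."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as srv:
        srv.bind((HOST, PORT))
        while True:
            data, addr = srv.recvfrom(1024)
            srv.sendto(data, addr)

def measure_rtt(samples=20):
    """Send time-stamped packets and average the observed round-trip times."""
    rtts = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as cli:
        cli.settimeout(2.0)
        for _ in range(samples):
            t0 = time.perf_counter()
            cli.sendto(str(t0).encode(), (HOST, PORT))
            cli.recvfrom(1024)              # wait for the echo
            rtts.append(time.perf_counter() - t0)
    return sum(rtts) / len(rtts)

if __name__ == "__main__":
    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.1)                         # give the server time to bind
    print(f"average RTT: {measure_rtt() * 1000:.3f} ms")
```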
In addition to the data collected, these investigations helped us make some unsettling observations about Intel's Edison devices. First, we found a bug in the configuration of BlueZ, the Linux implementation of the Bluetooth stack, over Yocto, the operating system running on the Edison devices: the RFCOMM link, a reliable protocol by specification, was unable to catch transmission errors. We suggest using TCP/IP over a BNEP Bluetooth connection as a reliable alternative to RFCOMM when using the Edison devices. Second, we observed that the RTT of the Wi-Fi D2D connection between Edison devices was significantly higher than that between the Raspberry Pi devices. We suspect that this is attributable to the Edison's energy-saving features.
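Undetected corruption of the kind described above can be exposed with an application-level integrity check. The following is a hedged sketch of the idea, comparing digests of a file before sending and after receiving; the file names are illustrative placeholders:

```python
# Sketch of an application-level integrity check that exposes transmission
# errors a link layer fails to catch: hash the file on both ends and compare.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so arbitrarily large transfers can be checked."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# "sent.bin" / "received.bin" are placeholder paths for the two endpoints.
sent, received = sha256_of("sent.bin"), sha256_of("received.bin")
print("transfer OK" if sent == received else "corruption detected")
```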
Investigating a Relevant Application: UpnAway
The second part of our research involved using D2D communication between Intel Edison devices to improve UpnAway. We chose this application because of its significance in improving cyber-physical systems, which can contribute to various industrial areas such as aerospace, transportation, robotics and factory automation systems. UpnAway addresses the Cyber-Physical System community's unmet need for a scalable, low-cost and easy-to-deploy testbed.
The original UpnAway testbed, however, was centralized: the UAV Node Modules (Fig. 4) ran on a central computer while streaming motion instructions to the drones. Since a distributed system offers higher scalability and stability than a centralized one, we helped the UpnAway team leverage D2D communication to upgrade the testbed to a distributed version of itself. We did this by mounting an Edison device on each drone and then establishing D2D connections between the Edisons, their respective drones and the central node. Now, instead of having all computation done in the central node, the central node only performs localization and informs each Node Module, now running on the Edison, of its drone's coordinates.
Future Work
In the future, we would like to study the energy consumption aspect of data transmission over D2D connections. Further, we are interested in measuring how these devices perform over different technologies in different environments: outdoors, indoors in an open corridor and inside cars, for example. Such measurements will contribute to the design of many developing IoT systems.
-
-
-
Wireless Car Control Over the Cellular System
Nowadays, electric robots play a big role in many fields, as they can replace humans and decrease their workload. Several types of robots are present in daily life: some are fully controlled by humans, while others are programmed to be self-controlled; in addition, there are self-controlled robots with partial human control. Robots can be classified into three major kinds: industrial robots, autonomous robots and mobile robots, the last of which will be discussed here. One of the main advantages of mobile robots is to provide safety by replacing humans in dangerous places like industrial areas, factories, underground rail tunnels, buildings after disasters, etc.
Our objective is to design and develop a mobile robot car that is capable of reaching a given destination using a camera that provides surface monitoring, while being controlled through a four-direction controller embedded in an Android mobile phone application. It is operated over a cellular communication system (which provides national and even international coverage, through roaming, for its working area) in parallel with autonomous action in the presence of obstacles. Its autonomous action is maintained by ultrasonic sensors mounted on the car body. This is of crucial importance, as disaster areas usually lose their Wi-Fi connections.
Its main role is to provide inside monitoring of disaster areas and damaged factories with hazardous spilled products, as well as remote anti-terrorist protection. There are many areas that humans cannot enter due to hazardous and fatal conditions or small dimensions, for instance collapsed buildings, areas after disasters and earthquakes, nuclear power plants and so on. One example is the great earthquake that occurred on March 11th, 2011 and caused damage to the northern part of Japan, particularly to the Fukushima Daiichi nuclear power plant. The disaster resulted in disabling the power supply and heat sinks, which led to the release of radioactivity in the area surrounding the plant. Such areas and environments are very dangerous for human beings to enter; in such cases a robotic car can be sent in to search, discover and provide live communication.
The project is divided into three main units: the robotic car, the cellular communication system and the Android application. The body of the robotic car is a plastic Magician Chassis programmed using an Arduino microcontroller in combination with sensors, motors and shields to avoid obstacles and enable surface monitoring of the car's surroundings. The cellular communication system builds a communication bridge between the Arduino microcontroller and the user interface controller. The third unit is a user-friendly application, designed using Android Studio, to control the robot through a smartphone from anywhere in the world. The project establishes two-way communication: from the robot to the user, showing the video content and status of the scenario; and from the user to the robot, indicating direction, distance and/or offloading demands.
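As a deliberately simplified illustration of the user-to-robot direction commands described above, here is a sketch of such a command channel; the command set, port and reply format are hypothetical stand-ins, not the project's actual protocol:

```python
# Illustrative sketch (not the project's actual protocol) of the user-to-robot
# control channel: a small TCP server that parses direction commands such as
# "FWD 50" and would forward them to the Arduino motor controller.
import socket

COMMANDS = {"FWD", "BACK", "LEFT", "RIGHT", "STOP"}  # hypothetical command set

def handle_command(line):
    parts = line.strip().upper().split()
    if not parts or parts[0] not in COMMANDS:
        return "ERR unknown command"
    # In the real system this is where the command would be relayed to the
    # Arduino (e.g., over a serial link) to drive the motors.
    return f"OK {' '.join(parts)}"

def serve(host="0.0.0.0", port=5000):
    with socket.socket() as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, conn.makefile("rw") as stream:
            for line in stream:
                stream.write(handle_command(line) + "\n")
                stream.flush()

if __name__ == "__main__":
    serve()
```

A client (for instance, the Android app, or simply `nc <host> 5000` for testing) would then send one command per line and read back the acknowledgement.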
We would like to present the project objectives, challenges and results, as well as a comparison to the previous state of the art.
-
-
-
Adaptive Network Topology for Data Centers
Authors: Zina Chkirbene, Sebti Foufou and Ridha Hamila
Data centers have an important role in supporting cloud computing services (such as email, social networking, web search, etc.), enterprise computing needs, and infrastructure-based services. Data center networking is a research topic that aims at improving the overall performance of data centers. It is a topic of high interest and importance for both academia and industry. Several architectures such as FatTree, FiConn, DCell, BCube, and SprintNet have been proposed. However, these topologies try to improve scalability without addressing the energy that data centers use and the network infrastructure cost, which are critical parameters impacting data center performance.
In fact, companies suffer from the huge amount of energy their data centers use and from the network infrastructure cost, which is seen by operators as a key driver for maximizing data center profits. According to industry estimates, the United States data center market reached almost US$39 billion in 2009, growing from US$16.2 billion in 2005. Moreover, studies show that the installed base of servers has been increasing by 12 percent a year, from 14 million in 2000 to 35 million in 2008. Yet that growth is not keeping up with the demands placed on data centers for computing power and the amount of data they can handle. Almost 30 percent of respondents to a 2008 survey of data center managers said their centers would reach their capacity limits in three years or sooner.
Infrastructure cost and power consumption are first-order design concerns for data center operators. In fact, they represent an important fraction of the initial capital investment while not contributing directly to future revenues. Thus, the design goals for data center architectures, as seen by operators, are high scalability, low latency, low average path length (APL) and, especially, low energy consumption and low infrastructure cost (the number of interface cards, switches, and links).
Motivated by these challenges, we propose a new data center architecture, called VacoNet, that combines the advantages of previous architectures while avoiding their limitations. VacoNet is a reliable, high-performance, and scalable data center topology that improves network performance in terms of average path length, network capacity and network latency. In fact, VacoNet can connect more than 12 times the number of nodes in FlatNet without increasing the APL. It also achieves good network capacity even with a bottleneck effect (greater than 0.3 even for 1000 servers). Furthermore, VacoNet reduces the infrastructure cost by about 50% and decreases power consumption by more than 50,000 watts compared to the previous architectures.
In addition, thanks to the proposed fault-tolerant algorithm, the new architecture shows great performance even when the failure rate equals 0.3: when about one third of the links fail, the connection failure rate is only 15%. By using VacoNet, operators can save up to 2 million US dollars compared to FlatNet, DCell, BCube and FatTree.
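To illustrate how a connection-failure rate of this kind can be estimated under random link failures, here is a generic Monte Carlo sketch; since VacoNet's exact wiring is not specified in this abstract, a random graph stands in for the topology, so the numbers it produces are illustrative only:

```python
# Generic Monte Carlo sketch: estimate the probability that a random
# source-destination pair becomes disconnected when each link fails
# independently with a given probability. The random graph below is a
# stand-in topology, not VacoNet.
import random
import networkx as nx

def connection_failure_rate(graph, link_failure_rate, trials=200):
    failures = 0
    nodes = list(graph.nodes)
    for _ in range(trials):
        g = graph.copy()
        g.remove_edges_from(
            [e for e in graph.edges if random.random() < link_failure_rate]
        )
        src, dst = random.sample(nodes, 2)
        failures += not nx.has_path(g, src, dst)
    return failures / trials

topology = nx.erdos_renyi_graph(64, 0.2, seed=1)  # placeholder topology
print(connection_failure_rate(topology, link_failure_rate=0.3))
```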
Both theoretical analysis and simulation experiments have been conducted to validate and evaluate the overall performance of the proposed architecture.
-
-
-
Narrowband Powerline Channel Emulation Platform for Smart Grid Applications
Authors: Chiheb Rebai, Souha Souissi and Ons Benrhouma
The characterization of powerline networks and the investigation of communication performance have been the focus of several research works. Despite the complex noise scenarios and variable attenuation of powerlines, narrowband powerline communication (NB-PLC) systems are a key element in smart grids, providing applications such as remote meter reading, home automation and energy management. Mainly located at the last mile of communication, NB-PLC systems participate in monitoring and controlling power consumption at different levels, from local utilities to final customers, in conjunction with wireless technology. To provide a proper communication flow over powerlines, it is essential to have efficiently designed and proven PLC systems that meet smart grid requirements. After PLC system design and development, a major task is in-field testing, which is laborious, time consuming and does not give enough information about system performance and robustness. On the one hand, test and verification results are as variable as the channel conditions, so it is not easy to extract relevant information from classical in-field tests. On the other hand, the behavior of existing industrial PLC systems cannot be accurately reproduced in a simulation environment. Therefore, a flexible standalone platform emulating NB-PLC channel phenomena helps overcome the difficulties of the verification process. In the literature, few research works have addressed narrowband powerline emulation.
First, the idea of emulating PLC channels using a standalone device started with broadband PLC, with a hardware platform based on a digital signal processor (DSP) and a field-programmable gate array (FPGA). Algorithms were then developed and optimized to reduce complexity and improve real-time performance. Regarding NB-PLC, an emulator has been proposed for indoor channels in the frequency range between 95 and 148.5 kHz, using analog and digital circuits for the channel transfer function and noise scenario. More flexibility and accuracy in generating the NB-PLC channel behavior has been studied and extended to support frequencies up to 500 kHz. The main challenge was the definition of time-varying reference channels and tunable, sophisticated noise scenarios for emulation. Both proposed NB-PLC channel emulators take into consideration only the attenuation, as a predefined function, and noise phenomena. Zero-crossing (ZC) detection is assumed to be ideal, so that it does not affect communication performance.
The objective of this research work is to propose a flexible DSP-based NB-PLC channel emulator encompassing the channel bottlenecks that interfere with communication. The channel attenuation is deduced using a bottom-up approach, and appropriate noise scenarios are defined to match realistic phenomena as closely as possible. The effect of zero-crossing variation is also taken into account. The overall parameters are optimized to be embedded on a DSP platform.
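For intuition about what such an emulator computes, the following is a minimal numpy sketch of the signal path under simplified assumptions: a transmitted block passes through a flat attenuation and is corrupted by background plus impulsive noise. A real emulator, including the one proposed here, uses frequency-dependent attenuation derived bottom-up and standard-defined noise scenarios; every parameter below is an illustrative placeholder.

```python
# Simplified NB-PLC channel sketch: flat attenuation plus background and
# impulsive noise. Real emulators use measured, frequency-dependent channels.
import numpy as np

rng = np.random.default_rng(0)

def emulate_channel(tx, attenuation_db=20.0, snr_db=15.0, impulse_prob=0.01):
    gain = 10 ** (-attenuation_db / 20)                   # amplitude attenuation
    rx = gain * tx
    noise_power = np.mean(rx**2) / 10 ** (snr_db / 10)
    rx += rng.normal(0, np.sqrt(noise_power), tx.shape)   # background noise
    impulses = rng.random(tx.shape) < impulse_prob        # impulsive noise
    rx[impulses] += rng.normal(0, 10 * np.sqrt(noise_power), impulses.sum())
    return rx

tx = np.sign(rng.standard_normal(1000))                   # toy BPSK-like block
rx = emulate_channel(tx)
```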
-
-
-
The impact of the International Convention on the Rights of Persons with Disabilities on the Internal Legislation of Qatar: Analysis and Proposals
Qatar ratified the Convention on the Rights of Persons with Disabilities (CRPD) on May 13, 2008, and has signed its Optional Protocol, pending ratification.
The CRPD aims at promoting the full and equal enjoyment of all human rights and fundamental freedoms by all persons with disabilities, and at promoting respect for their inherent dignity. The Optional Protocol (OP) establishes a complaints mechanism. The CRPD creates no new rights for individuals, but makes the full exercise of human rights possible for all. Complying with the CRPD requires states to introduce numerous adjustments to their internal legislation.
This study is of high significance for Qatar. At any rate, Qatar has been making significant efforts, and major progress, in catering to the needs of people with disabilities. Already in 1995, Qatar issued Law No. 38/1995, on aspects of the Social Security system, providing governmental assistance to social groups, including organizations of persons with disabilities. In 1998, Qatar created the Supreme Council for Family Affairs (SCFA, Decree No. 53/1998), a high-level national body that, among other things, has the mandate to deal with the implementation of those international conventions related to the rights of children, women, and persons with disabilities which have been ratified by Qatar. Following the SCFA's recommendations, in 2004 Qatar passed Law No. 2/2004, for the protection of people with special needs, which ensures the rights of persons with disabilities in all fields.
According to that law, people with special needs enjoy special protection in the State of Qatar, by means of:
- Special education, health treatment, disease prevention and vocational training;
- Receiving all the tools and means to facilitate their learning and mobility process;
- Receiving special qualifications and training certificates upon completion of certain training programs, and being appointed in areas that accommodate their acquired skills and training;
- Dedicating around 2% of the jobs in the private sector to people with special needs without any discrimination based on disability.
Nevertheless, the implementation of these efforts is still a work in progress. The United Nations Special Rapporteur on Disability reported after a brief mission to Qatar that there is “a clear commitment from Qatari society to the needs of persons with disabilities”, tangible at the Shafallah Centre for Children with Special Needs and at the Al Noor Institute for the Blind. The rapporteur stressed that “it appears that there is a clear commitment from the State and the private sector toward the issues confronting persons with disabilities in Qatar. Anecdotal evidence suggests that the private sector is a big contributor to institutions [for people with disabilities]”. Nevertheless, the rapporteur warned that “it also became clear that much of the caring and development remain almost exclusively disability-specific as opposed to the mainstreaming of the development needs of persons with disabilities. There appears to be a distinct lack of mainstreaming of disability in Qatar” [1].
In 2010, the International Disability Alliance, the global network aimed at promoting the effective and full implementation of the CRPD, recommended that “Qatar adopt a proactive and comprehensive strategy to eliminate de jure and de facto discrimination on any grounds and against all children, paying particular attention to girls, children with disabilities” [2].
The significance of this project has several facets. The project is aimed at helping fulfill Qatar's commitments as requested by the CRPD. Beyond that, and above all, it has the direct potential to yield tangible benefits for people with disabilities in Qatar, by helping remove barriers that may prevent their full integration into mainstream society or hamper their personal and professional development. Moreover, this project should help people with disabilities become visible in Qatar.
The objectives of the proposed study are:
To analyze the impact that the ratification of the CRPD has on the Qatari legal system;
To elaborate a general framework aimed at defining possible ways in which the Qatari legal system could better develop the mandates of the CRPD;
To elaborate recommendations about possible modifications to the internal Qatari legislation in order to specifically incorporate the mandates of the CRPD into the law of the country.
According to the 2010 Census, carried out by the Qatar Statistics Authority, the total number of people with disabilities in Qatar is 7,743, which represents 0.45% of the total population of 1,699,435 inhabitants. Among non-Qataris, the percentage of people with disabilities is 0.28%, while among Qatari nationals the figure is six times higher, 1.71%, with 2,972 persons with disabilities out of a population of 174,279. All these numbers appear significantly low by international standards, according to which people with disabilities compose around 10% of the population. Some phenomena could help explain these numbers, though [3].
The significantly low number of non-Qataris with disabilities can be explained by the fact that this group of the population consists mostly of young, healthy workers who come to the country with a work contract, after having passed medical tests of aptitude for the position. The disability rate among Qataris would thus better reflect the natural rate of disability in the country. However, this rate is still too low by international standards. The causes of this low number could be multiple. First, it may show that in Qatar there is not yet full awareness of what constitutes a disability, and thus people do not declare themselves–or declare family members–as disabled. Second, it is possible that the tightly knit family structure provides for the disabled, so that little or no support is requested from outside the family, causing considerable under-registration of cases of disability, since the disabled do not reach out for specific social or health services. Third, it is also possible that, despite all efforts, Qatar still has a relatively poor network of services aimed at satisfying the needs of the disabled, who thus go undetected in the official statistics. At any rate, the numbers show that there is significant room in Qatar for the development of strategies aimed at raising awareness and implementing programs and services for the disabled.
In that sense, this project might produce very valuable information and propose high-impact measures to further the protection that Qatar provides to people with disabilities. The project should thus have critical significance for people with disabilities in Qatar, and also for their families.
The Qatari population at large would benefit from this project, since it is ultimately aimed at helping integrate a group of people–the disabled–whose contribution to the country's human capital can be of extreme value in the context of an increasingly complex, diverse global society.
Finally, this project can become a valuable tool to help reaffirm Qatar's leadership in the region in matters of human rights and human development.
[1] Report of the Special Rapporteur's Mission to Qatar - Preliminary Observations (9 – 13 March 2010). http://www.un.org/disabilities/documents/specialrapporteur/qatar_2010.doc
[2] IDA CRPD FORUM Suggestions for disability-relevant recommendations 7th UPR Working group session (8 to 19 February 2010). http://www.internationaldisabilityalliance.org/sites/disalliance.e-presentaciones.net/files/public/files/UPR-7th-session-Recommendations-from-IDA.doc
[3] Qatar Statistics Authority. Census 2010. http://www.qix.gov.qa
-
-
-
The Doha Paradox: Disparity between Educated and Working Qatari Women
Authors: Mariam Bengali, Tehreem Asghar and Rumsha Shahzad
Qatar is considered one of the best places in the world for women to get an education. Research has shown that for every man, there are six women enrolled in tertiary education. This upward trend in the willingness and ability of women to receive higher education is undeniably encouraging. However, though labelled a “vital element within the development process” of Qatar, Qatari women's role in the labor market is, at best, limited. Recent data demonstrate that the participation of women in Qatar's labor force is a meagre 35%. Qatar, however, has made the empowerment of women in the labor market a significant part of its development strategy. The designers of the Qatar National Vision formulated its first National Development Strategy (2011–2016) with human development as one of the four major pillars of this strategy. One of the aims of human development under the NDS (2011–2016) is to increase opportunities for women to “contribute to the economic and cultural world without reducing their role in the family structure.” This research, therefore, intends to analyze a) Qatar's success in carving out a more vital role for its female citizens and b) the obstacles to the realization of its goal of establishing a more gender-inclusive labor force. The reasons for this analysis are, therefore, not solely to augment and scrutinize Qatar's development strategy but to demonstrate that Qatar's extensive investment in education will not reap benefits if the majority of its educated women do not take advantage of the various avenues their learning opens up. Whether this is due to unwillingness on the part of women to work, to gender-neutral factors such as the gap between education, training and job placement, or to other motives, this research aims to ascertain the reasons for this disparity.
-
-
-
Measuring Qatari Women's Social Progress through Marriage Contracts
Authors: Mohanalakshmi Rajakumar and Tanya Kane
Contemporary Qatari women's social progress can in part be measured through an analysis of current marriage practices. The Islamic marriage contract is a legal and religious document wherein Muslim brides indicate their expectations for post-marital life, and it is an essential step in the marriage negotiation process. The conditions they stipulate in their marriage contracts are symbolic of the degree to which they exercise agency in their personal and professional lives as wives. As part of a larger study on marriage practices in Qatar, we collected and analyzed marriage contracts from a broad range of Qatari families. We treated these documents as archival evidence reflecting changing bridal expectations from 1975 to 2013. A content analysis of the contracts in our sample demonstrated an increase in the age at marriage for both Qatari men and women. The contracts also show the major areas in which brides negotiated the terms of their married lives, including educational, professional, and household expectations. We read these stipulated conditions as moves to guarantee autonomy as wives.
-
-
-
Globalization and Socio-Cultural Change in Qatar
Globalization is impacting many aspects of life in Qatar, and Qatari nationals must increasingly cope with forces generated by economic, cultural, political, and social changes in their country. Because of borrowing, large-scale migration, new computer technology, and multinational corporations, many cultural traits and practices in Qatar have been altered. The widespread existence of fast foods, cell phones, the internet, Western movies and TV shows, global electronic and print media, giant shopping malls, and the latest fashion designs are some excellent examples of the direct and indirect diffusion and exchange of products and cultural features between Qatar and the Western world. Despite the continuing positive social and economic outcomes of modernization and development, the inevitable, ongoing, powerful economic and social changes in Qatar have put the country at a crossroads and created formidable cultural challenges for Qatari nationals. On the one hand, the social and economic consequences of Qatar's development and modernization by 2030 will increase the mutual dependence between Qatar and the expanding global economy and strengthen the continuing cultural contact and interconnection between Qatar and the global culture. On the other hand, in conjunction with the rapid economic and social changes, the country must also commit to making its future path of development compatible with the cultural and religious traditions of an Arab and Islamic nation. The crucial tension between continuing the move toward socio-economic development and preserving the Arab-Islamic tradition is stated in the Qatar National Vision 2030: “Qatar's very rapid economic and population growth have created intense strains between the old and new in almost every aspect of life. Modern work patterns and pressures of competitiveness sometimes clash with traditional relationships based on trust and personal ties, and create strains for family life. Moreover, the greater freedoms and wider choices that accompany economic and social progress pose a challenge to deep-rooted social values highly cherished by society.” (Qatar National Vision 2030, 2008:4). To minimize the anticipated “intense strain” between the old and new aspects of life, and to avoid a clash between traditional cultural values and the emerging modern patterns of social life in Qatar, it is crucial for government officials, especially policy makers in Qatar, to understand the perceptions of Qatari men and women about the ongoing changes, their outcomes, and their impact on culture, religion, family, and social life. Therefore, as an anthropologist, I propose to explore the following critical issues related to the social and cultural consequences of the modernization of Qatar by 2030:
How will the new generation of Qatari nationals internalize the economic, environmental, human, and social developments envisioned by QNV 2030?
Do Qatari nationals find the current economic, social, and cultural changes consistent with core values of their culture or do they feel threatened by these changes?
What strategies, if any, have Qatari nationals devised to deal with threats to their cultural and religious identity?
How do Qataris perceive and feel about the ongoing large-scale social and economic changes and interpret their manifestations and outcomes?
Do they contest or embrace these changes?
Will the socio-economic changes in Qatar sculpt and drive cultural norms and practices, or will the traditional family structure, religious beliefs, and cultural assumptions direct the process of social change in Qatar?
What will be the impact of the QNV 2030 changes on the national psyche and cultural identity of Qatari citizens?
Will Qatari citizens form a new identity and transform their old religious and cultural identity, maintain the traditional religio-cultural identity, or find a balance between the old and a new one?
This project will offer several significant applied and practical outcomes and benefits for government leaders and policy makers in Qatar. In addition to exploring the perceptions of Qatari nationals toward social and economic change and its consequences for Qatari culture, this project provides an excellent opportunity to identify and assess: the intended and unintended changes in attitudes and behaviors of Qatari men and women regarding marriage, family, work, education, and related social patterns, as well as values, ideas, symbols, and judgments; the new cultural adaptive kits that are likely to emerge; the degree of consistency of the new emerging cultural patterns of behavior, and of changes in attitudes and behaviors, with the cultural traditions and social values of Qatari men and women; the different mechanisms through which Qatari people combine modern life with local traditions and cultural values; and the different meanings that different individuals and groups (e.g., ethnic, gender, class, and age) attach to the ongoing cultural and social changes in Qatar. Furthermore, this project on the impact of social and economic change on Qatari culture and society will provide government officials in Qatar with a new perspective so they can understand the link between the private lives of their citizens and the larger social and cultural issues, and the impact of social change on communities and social institutions in Qatar. This new perspective will enable government and business leaders in Qatar to chart economic and social progress more effectively and with a clearer vision, assess the life chances of their citizens in a new globalized Qatar, face future problems successfully, and build a stronger bridge between the present and the future through the Qatar National Vision 2030. Finally, this project will enhance our understanding of the social and cultural consequences of globalization for Islamic societies in general and the GCC in particular. It will help social scientists to understand the unique socio-cultural characteristics of Islamic countries in the Gulf region in their confrontation with the West and global forces. Moreover, it will add to our knowledge about the perceptions of Muslim Arabs in the Gulf region regarding the powerful technological, political, and social changes taking place in this region. The findings of this project will help social scientists to explain whether Qatari nationals find these changes incompatible with their cultural traditions and resist them, or find them compatible with the cultural fabric of their society and embrace them.
-
-
-
Water Resources and Use in Qatar Prior to the Discovery of Oil
Qatar is known as one of the most arid countries in the world, with all of its land characterised as desert, which in geographical terms is defined as territory with no surface water. Despite this unpromising situation, people have lived on the Qatar peninsula for thousands of years, carefully conserving, harvesting and using the scarce water resources. This paper will argue that a better understanding of the conservation and utilisation of fresh water in the past may have implications for the future development of agriculture and water management in Qatar. All the water resources in the past were fed either directly by scarce and sporadic rainfall events or through sub-surface water (fossil water), which formed a freshwater lens floating above a predominantly saline aquifer. The presence of shallow wadis located throughout the country, but particularly near the coast, is indicative of the occasionally heavy rainfall. This paper will investigate the methods used to provide water to the traditional settlements and more ancient archaeological sites throughout the country. The paper will begin by reviewing the hydrological structure of the Qatar peninsula and its condition prior to the modern over-exploitation of water resources from the 1950s onwards. It will pay particular attention to the location of settlements around the northern coast, where particularly favourable conditions exist as a result of the north Qatar arch. The settlements of the northern coast employed a variety of water catchment methods, including modifying shallow natural depressions (rawdhas) to provide either grazing land or, in some cases, agricultural zones. In some cases, such as at Jifara, extensive sunken field systems enclosed within mud-brick walls were created, fed by a series of shallow wells taking advantage of both rainfall catchment and fossil rainwater. In other cases, such as at Ruwayda on the north Qatar coast, an existing rawdha appears to have been modified to create a garden with trees enclosed by a fortified masonry wall. A variety of well forms were created, including shaft wells, where water was accessed by buckets or other receptacles lowered by rope (two examples of traditional leather buckets with goat-hair rope have been found in archaeological excavations in Qatar), and wide, shallow stepped wells, which allowed direct access to the water for either humans or animals. In the centre of the country, access to water has always been more difficult, because the freshwater aquifer is closest to the surface near the coast. As a consequence, the majority of human settlement in Qatar has always been close to the coast. Where inland settlements do exist, they are usually located either within or next to some geographical feature, such as a wadi or a large depression, which acts as a catchment area for rainfall. Such settlements are nearly always supplemented with wells, which tend to be larger and much deeper than those on the coast. Examples of inland wells include those associated with gardens at Umm Salal Muhammad. The transformation of inland wells with motorised pumps from the 1950s onwards was one of the causes of the depletion and subsequent salinization of the freshwater aquifer in Qatar. In addition to wells and modified rawdhas, a number of other forms of water catchment exist.
One of the most surprising water sources is the perennial spring (naba‘a), which, following periods of heavy or sustained rainfall, may result in water literally springing out of the ground. Another unusual source is the freshwater springs located off the coast, which were traditionally exploited by fishermen and pearl divers. A rare form of water catchment exists in the Jabal Jassasiya rock formations on the north-east coast of Qatar, where the natural contours of the rock were modified to form a cistern blocked on one side by a masonry dam. Masonry dams, possibly of medieval date, have also been documented near Umm Salal Muhammad.
-
-
-
Qatar's Standing in Global Energy Governance Institutions
Authors: Lawrence Saez and Harald Heubaum
Qatar's position in the oil and gas market has changed dramatically since the discovery of oil in the Dukhan field in January 1940. Following its independence in 1971, the Qatari economy has been radically transformed as a result of the discovery of the South Pars/North Dome condensate-gas field in 1971. The discovery of the natural gas field and its subsequent exploitation rapidly made Qatar one of the wealthiest countries in the world (World Bank 2013). Given the transnational characteristics of the South Pars/North Dome field (shared between Iran and Qatar), under the leadership of former Emir Sheikh Hamad bin Khalifa Al-Thani, Qatar began playing a leading international role in global energy markets and gas governance institutions. Qatar's international position in energy markets was furthered by former minister of foreign affairs Sheikh Hamad bin Jassim Al-Thani from 1992 onwards. Qatar's economic prosperity is directly linked to the sustainability of the wealth generated from natural gas revenues. As such, Qatar has sought to further its leadership role in natural gas governance institutions. This project aims to analyse the role of Qatar as a critical player in emerging global energy governance architectures. The international relations literature on energy governance has tended to focus on the role of the state and markets in the governance of specific energy sector commodities, like oil or natural gas (Lesage, Van de Graaf and Westphal, 2010; Victor, Hults and Thurber, 2012). Other strands in the literature (Goldthau, 2012; Goldthau and Witte, 2010; Van de Graaf, 2013; Victor and Yueh, 2010) have highlighted the growing importance of international institutions and fora (e.g., the International Energy Agency or the G20) in constructing an emerging global energy governance architecture, though these institutions need not be energy commodity specific. Building upon these theoretical approaches, our project attempts to provide more concrete evidence on how specific players (or countries) exercise their influence at a systemic level, particularly in terms of the governance of energy sector commodities within international institutions. In our research project, we offer a case study of Qatar's growing importance in international energy fora, focusing on its involvement in international institutions dealing with natural gas. International institutions and fora relevant to this research include the International Energy Forum's (IEF) natural gas dialogue, the Gas Exporting Countries Forum (GECF), the International Gas Union (IGU) and the International Association of Oil and Gas Producers (OGP). Unlike the International Energy Agency (IEA), which has received significant scholarly attention in recent years, international institutions focused specifically on natural gas have not been systematically analysed to date, despite their important role in collecting and providing industry information, promoting the development of technologies, setting internationally accepted standards, advocating common policy positions and supporting all aspects of governing the industry's upstream and downstream operations. This lack of systematic analysis is due to the emerging nature of natural gas governance institutions.
Scholarly attention to global energy governance institutions has identified the fragmented nature of inter-state energy governance institutions (Leal-Arcas and Filis forthcoming, Dubash and Florini 2011). As the international regime on natural gas grows in importance in global energy dialogue settings, we anticipate an increasing formalisation of institutional coordination in industry-wide information provision and advocacy for specific policy coordination outputs. Further, membership in these institutions and fora is significantly broader than membership in the IEA, which is restricted to members of the Organisation for Economic Co-operation and Development (OECD). Even in that particular instance, membership of the OECD does not automatically guarantee membership in the IEA. Member countries of the IEA must demonstrate that, as net oil importers, they have reserves of crude oil and/or product equivalent to 90 days of the prior year's average net oil imports, a demand restraint programme for reducing national oil consumption by up to 10%, the legislation and organization necessary to operate on a national basis, coordinated emergency response measures, and legislation and measures in place to ensure that all oil companies operating under their jurisdiction report information as necessary (International Energy Agency 2013). In contrast, membership in the IEF, the IGU and the OGP, which straddle the producer-consumer country divide, is more diffuse, thus making them fruitful cases through which to analyse Qatar's relationship with other major players in international gas markets and assess its evolving role in emerging global energy governance architectures. At a macro level, the proposed project is innovative because it will be the first effort to analyze Qatar's growing influence in global energy governance institutions dealing with natural gas. Moreover, the project is innovative because it adapts a well-known methodological technique from computational sociology (i.e., social network analysis) to a wider application in international relations research. This methodological technique is used because it will best help us understand the power relations between actors in a global setting. In addition, the research will utilize two other cutting-edge methodological tools used in interdisciplinary social science research, namely multi-value qualitative comparative analysis (mvQCA) and fuzzy-set qualitative comparative analysis (fsQCA). These two techniques will enable us to identify multiple sets of covariate combinations that are consistently associated with a particular output value, specifically as they pertain to the causal factors leading to the emergence of key international relations actors in the global energy governance environment. At a micro level, our research project will offer a detailed timeline to explain Qatar's ascendancy in global energy governance, visualizing the development of Qatar's influence over time as well as revealing important insights into the density and strength of the actor network itself. In turn, this will enable predictions about the sustainability, impact and future of Qatar's engagement. By analysing Qatar's participation in energy governance institutions, the proposed research project engages directly with the Qatar National Research Strategy (2012) goals and objectives dealing with international affairs (SAH 3.1) and public policy, governance and regulation (SAH 3.2).
Moreover, in addressing Qatar's growing role in natural gas markets, we would also contribute towards the expanded demand for natural gas objective (EE 1.3).
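To make the social network analysis step concrete, here is a toy sketch of the affiliation-network idea: countries become linked by shared membership in gas governance fora, and centrality scores summarize relative positioning. The membership lists below are illustrative placeholders, not project data.

```python
# Toy affiliation-network sketch: a bipartite country-forum graph is
# projected onto countries, linking those with shared memberships, and a
# simple centrality measure is computed. Memberships are illustrative only.
import networkx as nx
from networkx.algorithms import bipartite

memberships = {
    "IEF": ["Qatar", "Iran", "Norway", "Russia"],
    "GECF": ["Qatar", "Iran", "Russia"],
    "IGU": ["Qatar", "Norway"],
}

B = nx.Graph()
for forum, countries in memberships.items():
    B.add_node(forum, bipartite=0)
    B.add_nodes_from(countries, bipartite=1)
    B.add_edges_from((forum, c) for c in countries)

countries = {n for n, d in B.nodes(data=True) if d["bipartite"] == 1}
G = bipartite.weighted_projected_graph(B, countries)  # country-country ties
print(nx.degree_centrality(G))
```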
-
-
-
Who Supports Political Islam and Why?
By Mark Tessler
Who Supports Political Islam and Why? An Individual-Level and Country-Level Analysis Based on Data from 56 Surveys in 15 Muslim-Majority Countries in the Middle East and North Africa
Background and Significance
Islam today occupies a central place in discussions and debates about governance in the Muslim-majority countries of the Middle East and North Africa. Indeed, whether, to what extent, and in what ways Islamic institutions, officials and laws should play a central role, or at least an important role, in government and political affairs are among the most important, and also the most contested, questions pertaining to governance in the region at the present time. As Ali Gomaa, the Grand Mufti of Egypt, wrote in April 2011 in connection with the democratic transition struggling at the time to take shape in his country, Islamist groups can no longer be excluded from political life, but neither does one group speak for Islam, nor should the nation's religious heritage interfere with the civil nature of its political processes. Thus, he concluded, Egypt's revolution has swept away decades of authoritarian rule but it has also “highlighted an issue that Egyptians will grapple with as they consolidate their democracy: the role of religion in political life.” Concerns about the place of Islam in political affairs, and about the relationship between democracy and Islam, are equally important elsewhere in the region. The Secretary General of Tunisia's Islamist al-Nahda Party, Hamadi Jebali, described the political challenges facing his country in a May 2011 public lecture and asked, “What kind of Democracy for the New Tunisia: Islamic or Secular?” And again, at about the same time, an Iraqi constitutional lawyer and media personality, Tariq Harb, wrote that a central element in the struggle to define his country's political future is the question of how “to balance religion and secularism.” These and many similar statements addressed to the question of Islam's role in government and political affairs were made against the background of political transitions set in motion by the spontaneous and frequently massive popular uprisings that shook the Arab world at the end of 2010 and in the first months of 2011 – events popularly known at the time as the Arab Spring. Initially in Egypt and Tunisia, but soon elsewhere as well, most notably in Bahrain, Yemen, Syria and Libya but also in other countries, protesters came into the streets and public squares to express their anger at decades of what they believed to be misrule by governing regimes that were authoritarian, corrupt, and concerned only with their own privilege and that of their friends. Islamist parties and movements were not at the forefront of these uprisings, although in at least some countries they were involved, sometimes heavily, in subsequent political developments. But the questions raised at the time by Gomaa, Jebali, Harb and many others were not only, or not primarily, about the political space that should or should not be given to Islamist movements. These questions were deeper and more fundamental, and they were not new, even if the possibility of a political transition gave them increased salience and greater urgency in some countries.
At most, the uprisings and possibilities for a political transition intensified long-standing and largely unresolved debates about whether, how, and to what extent a country with an overwhelmingly Muslim population should be ruled by a government and legal system that are in some significant way meaningfully Islamic.
Hypotheses and Analyses
It is against this background that my study focuses on the perceptions, judgments and preferences of ordinary citizens about the role that Islam should play in government and political affairs. Drawing on a new dataset constructed by merging 56 nationally representative public opinion surveys carried out between 1988 and 2014, I test hypotheses about the explanatory power, in shaping attitudes toward political Islam, of (1) cultural values, such as those pertaining to the status and rights of women; (2) political evaluations, specifically those concerning the legitimacy and performance of the government in power; (3) economic factors, specifically the degree to which the economic circumstances of the individual are advantageous or disadvantageous; and (4) information and exposure, particularly the variation in the learning experiences associated with education. Judgment about political Islam is the dependent variable in the regression analyses by which these hypotheses, and the causal stories associated with each, are tested. The analysis includes control variables, particularly personal religiosity. Although the data are pooled in the initial analyses, they are subsequently disaggregated by gender and age, taken in combination, in order to see whether each hypothesis and the associated causal story has more explanatory power among some subsets of the population than others. These regressions are also run separately for individuals who reside in countries where the government has a strong Islamic connection and countries where such a connection is weak or non-existent. To the individual-level analysis outlined above is added a survey-level, or country-level, analysis in which findings are mapped across countries. More specifically, findings about individual-level relationships, both in general and for demographic subsets of the population, are treated as dependent variables, and country attributes are treated as independent variables. The dependent variable measures are (1) whether or not the individual-level relationship is statistically significant in each survey, and (2) the survey-specific regression coefficients resulting from the individual-level analysis. Independent variables include a large number of political, economic, societal, and demographic country attributes. Information about each attribute at the time the country was surveyed, and also lagged measures for many attributes, was collected and included in the dataset. The addition of this second level of analysis makes it possible not only to determine the degree to which various individual-level factors play a role in shaping the attitudes toward political Islam held by ordinary citizens, and whether and how this differs across within-country demographic subsets of the population, but also to identify and characterize the national political, economic, and societal environments in which any of these individual-level causal stories is disproportionately likely to obtain.
Data and Measures
Space does not permit providing a detailed description of the data and a full account of the measures to be employed here.
Instead, I am attaching a PPT that includes among its many slides: a list of the 56 surveys in the dataset (N = 82,489), giving the country and year in which each was conducted and the sample size; a list of the country attributes on which information has been collected and included in the dataset; and a list of some of the questions about Islam's role in government and political affairs that have been employed to derive a measure of attitudes toward political Islam. Almost all surveys contained several of these questions, although in a small number of surveys only one or two were asked. Although beyond the scope of the present discussion, the measurement operations included assessments of validity and reliability, and procedures to establish conceptual equivalence between measures derived from surveys in which only some of the same questions were asked. Finally, more broadly, the PPT presents preliminary findings from a partial analysis of the data.
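As a schematic illustration of the two-level design described above (not the study's actual code or data), the sketch below estimates a per-survey individual-level regression, collects the survey-specific coefficients, and then regresses them on a country attribute; all variable names and the synthetic data are placeholders.

```python
# Schematic two-level analysis: stage 1 fits an individual-level model per
# survey; stage 2 regresses the stage-1 coefficients on country attributes.
# All data below are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in for the pooled individual-level dataset.
surveys = []
for sid in range(10):
    n = 500
    gdp = rng.uniform(1, 50)                     # country attribute (stage 2)
    df = pd.DataFrame({
        "survey": sid,
        "gdp": gdp,
        "education": rng.normal(size=n),
        "religiosity": rng.normal(size=n),       # control variable
    })
    df["political_islam"] = (-0.1 * gdp / 50 * df["education"]
                             + 0.5 * df["religiosity"]
                             + rng.normal(size=n))
    surveys.append(df)
data = pd.concat(surveys)

# Stage 1: per-survey regression, keeping the education coefficient.
stage1 = []
for sid, grp in data.groupby("survey"):
    fit = smf.ols("political_islam ~ education + religiosity", data=grp).fit()
    stage1.append({"survey": sid,
                   "beta_education": fit.params["education"],
                   "gdp": grp["gdp"].iloc[0]})
coefs = pd.DataFrame(stage1)

# Stage 2: regress survey-specific coefficients on the country attribute.
print(smf.ols("beta_education ~ gdp", data=coefs).fit().summary())
```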
-
-
-
Exploring the Lived-Experiences of Caregivers Caring for Elderly Persons in Qatar
Introduction
In many Arab countries, including Qatar, the family as a social institution is thought to be the cornerstone of society. The family is particularly important for older individuals, especially when their physical and/or mental health declines to the point where they can no longer function independently. Once this happens, family members (e.g. a spouse or child) often become informal caregivers for the individual requiring care (Chappell, McDonald, Lynn, & Stones, 2007). Caregiving often requires caregivers to acquire specialized knowledge unique to the needs of the individual they are caring for, meet with healthcare professionals at different stages of the caregiving process, and gain unique skills often associated with the work of healthcare professionals (Leiter, Krauss, Anderson, & Wells, 2004). Lastly, caregiving roles are ever-changing as both the caregiver and care-receiver age and new challenges present themselves. There is an abundance of social research on the experiences of caregivers for aging individuals whose health status has declined. This existing research has primarily focused on the burdens of caregiving and the different strategies and resources caregivers use to cope with the demands of caregiving. Recently, research has suggested that while caregiving can be burdensome, many benefits can also accrue to caregivers from their caregiving experience (Corman, 2009). However, none of this research has been conducted in Qatar, and only a few studies have been done in the Arab region. This research gap is especially problematic since reports suggest that there are problems in older persons' care in Qatar (ESCWA, 2013).
Purpose
This study addresses the aforementioned gap in social scientific inquiry by investigating the experiences of Qatari and non-Qatari caregivers in Qatar who provide care for older family members. More specifically, we focus on exploring the stresses and burdens of caregiving, the coping strategies and resources of caregivers, and the benefits of caregiving. As such, this research focuses on the lived experiences of personal home caregivers and the consequences, not only negative, that they experience when caring for elderly persons who require support.
Theoretical Framework and Method
One of the dominant frameworks used in exploring the lived experiences of caregivers combines the stress-process model of Pearlin, Mullan, Semple and Skaff (1990) and the coping-process model of Lazarus and Folkman (1984). Pearlin et al. (1990) provided a conceptual framework that looked at the stressors associated with caregiving; the model focuses on the many related relationships, and the developing and changing nature of these relationships over time, eventually leading to stressor-outcomes. This model of stress allows for the investigation of how conditions develop and are interrelated. One way of dealing with stressors and the burdens of caregiving is to use a variety of resources and strategies that act as buffers against them. Lazarus and Folkman (1984) described coping as a shifting process that varies depending on the stressful encounter or experience, allowing individuals to mediate the effects of stress on their well-being by managing both internal and external demands that are appraised as taxing (Kelso, French, & Fernandez, 2005; Folkman & Moskowitz, 2004). Folkman (1997) described coping as drawing on five different resources: social support networks, utilitarian resources, general and specific beliefs, problem-solving skills, and an individual's health, energy, and morale. In order to gain a more complete understanding of caregiving in Qatar, it is important to account for both the positive and stressful experiences of caregiving, in addition to the coping strategies and resources utilized throughout the caregiving experience. The stress-coping process models of Pearlin et al. (1990) and Lazarus and Folkman (1984) were chosen as the conceptual framework because they allow for a scope that looks beyond adjustment and toward positives, thereby forcing attention to both the stressors and benefits of caregiving (Kelso et al., 2005; see also Corman, 2009). Because this study aimed to explore the lived experiences of Qatari and non-Qatari caregivers caring for older family members in Qatar, it employed a qualitative research design in order to gain a better understanding of the phenomenon under study; it relied mainly on participants' views of the situation being studied and drew attention to its complexity (Creswell, 2003). Aligned with qualitative research, this study therefore used an inductive approach to generating knowledge, beginning with interviews and moving towards identifying patterns based on the experiences of participants (Rudestam & Newton, 2001). The primary methodological approach used in this study was transcendental phenomenology as propounded by Husserl (1970) and modified by Moustakas (1994). Transcendental phenomenology is a qualitative research strategy and philosophy that allows researchers to identify the essence of experience as it relates to a certain phenomenon, as described and understood by the participants of a study (Creswell, 2003; Nieswiadomy, 1993). As Moerer-Urdahl and Creswell (2004) explained, this approach is valuable when a phenomenon is identified that needs further investigation and individuals are available to provide descriptions of, and insights into, the phenomenon. Utilizing qualitative, semi-structured interviews with open-ended questions, based on the methodological approach of transcendental phenomenology, can assist in gaining a better understanding of the themes that arise in people's descriptions of the stressors, benefits, and coping strategies and resources for this sample of caregivers.
With this said, phenomenology was chosen because of the researchers' intent to explore and gain a better understanding of the lived experiences of Qatari and non-Qatari caregivers in Qatar.
Findings
In total, we interviewed 22 Qatari and non-Qatari caregivers ranging in age from 20 to 50 years. Two of the caregivers were male and 20 were female. The findings reported in this presentation/poster focus on the different stressors and joys of caregiving and the coping strategies and resources caregivers discussed. We also discuss the implications of these findings.
-
-
-
Meanings of Women's Agency: Improving Measurement in Context
Authors: Yara Qutteina, Laurie James-Hawkins, Buthaina A. Al-Khelaifi and Kathryn M. Yount
Decades of research have been conducted to understand the processes that undergird women's empowerment and one of its core components, women's agency. However, few inroads have been made into the study of how these processes work in Arab Middle Eastern societies. In fact, research on women's agency in the Arab Middle East has generally relied on measurement instruments that have been adapted without rigorous testing. This study is the first in Qatar to explore how women in Qatar understand women's agency scale items.
Aim
The aim of this study is to explore women's interpretations of selected scale items about decision-making, freedom of movement, and gender attitudes.
Methods
Cognitive interviews were conducted with 24 Qatari women ages 18–21. These women had previously responded to agency scales as part of a larger two-wave survey study on the influence of kin on women's participation in the labor market. The semi-structured cognitive interviews explored one decision-making item, one freedom-of-movement item, and five gender attitude items. Grounded theory analysis techniques were used: women's responses were coded and analyzed for themes and patterns.
Results
For the decision-making item, the majority of women originally reported that they made their own decisions; yet probing revealed family input as an important part of the decision-making process. We conclude that the response options for this item were not uniformly interpreted by participants, and that this variation in interpretation made the group of women who reported making the decision on their own more heterogeneous than the researchers intended. Women's multiple interpretations of the decision-making scale suggest that the item was too vague for the context in which it was measured. On the other hand, women seemed to understand the item measuring freedom of movement as the researchers intended, as almost all participants easily indicated that they needed input from others on the freedom-of-movement item. We conclude that the uniformity in responses is due to the specificity of the item, which led women to interpret it as intended. Women's responses to gender attitude items were reflective of broader Qatari societal norms rather than their own individual opinions. In their survey responses, women reflected less gender-equitable attitudes on some items and more gender-equitable attitudes on others. When probed during the cognitive interviews, inconsistencies appeared between their initial responses and their subsequent discussion of gender roles in Qatari society. It appears that these young women are caught between their own beliefs about gender equality and larger Qatari societal norms. These conflicts resulted in inconsistent responses across the gender attitude scale.
Conclusion
Agency measures commonly used in the Arab Middle East are not necessarily appropriate for such a context, especially when used with young Qatari women. Generally, the testing revealed that the scale items were interpreted in different ways by different women. This highlights the need for deeper exploration of women's understanding of agency scale items before their use in new social contexts. Accordingly, we recommend that scale items be systematically tested whenever a researcher wants to field them in a new cultural context, to determine whether they are being interpreted consistently across women and in line with the researchers' intent. It is also important to identify scale items which may elicit responses that are representative of societal norms rather than personal beliefs. We recommend that such items be modified to encourage women to express their own opinions.
-
-
-
Effect of Intensive Weight Loss Camp and Maintenance Clubs on Overweight School Children in Qatar
Obesity and overweight continue to rise in Qatar due to a confluence of factors such as genetics, overeating, inactivity, a tradition of food-centered social events, the convenience and advertising of energy-dense fast foods, and a hot climate that makes outdoor activities impractical most of the year. Estimates by experts within and outside Qatar point to an extremely high rate of obesity and overweight in Qatar, with the World Health Organization placing the rate at 78%. This places Qatar among the top countries worldwide in the overall prevalence of obesity and overweight. Childhood obesity in particular has also been rapidly increasing, with the combined rate of obesity and overweight hovering around 40%, up from below 30% less than ten years ago. This trend is alarming due to the increased risks for obesity-related conditions such as diabetes and coronary heart disease, and for lower quality of life. Hence, comprehensive obesity prevention interventions are needed to stem the rise of obesity among Qatari children. This study was conducted to evaluate the effectiveness of an integrated weight loss intervention incorporating lifestyle education, physical activity, and behavioral psychology nudges among Qatari school children. The intervention was designed to integrate family and school support and to fit within the Qatari school system's calendar and schedule. The study was branded Agdar/أقدر and conducted by an interdisciplinary team of collaborators from Qatar (Qatar University, Supreme Education Council, Aspire, Hamad Medical Corporation) and external partners (Imperial College, Leeds Metropolitan University/MoreLife, UK).
In the first year of a three-year intervention study, four randomly chosen schools in Qatar participated in the intervention with a total of 941 Qatari children (316 girls and 625 boys) between 9 and 12 years of age, of whom 430 qualified to participate in the study. A group of four other randomly chosen schools served as a control. Of the 430 qualified children, one hundred (50 boys and 50 girls) with BMI in the 95th percentile from the intervention schools were enrolled in a two-phase weight loss intervention. Phase 1 consisted of an intensive weight loss camp with a highly structured set of activities combining physical activity, lifestyle learning, dietary control, behavioral nudge techniques, and social activity. The second phase consisted of ten weeks of after-school sessions on lifestyle education and weight management for those children who successfully completed camp. These after-school/community clubs were run on school premises to facilitate integration into the school schedule. The two phases were designed to be complementary: the camp helps children lose weight and introduces them to healthy lifestyle behaviors, whereas the after-school phase embeds and consolidates the knowledge already learnt and helps with long-term weight management. During the camp, children participated in a range of structured, interactive, and skill-based activities including a mixture of water-based activity, contact games, and electives, where the children were able to choose from a range of physical activities. At camp, participants underwent a series of assessments including anthropometric measurements (weight, height, BMI, waist circumference, blood pressure), lifestyle and physical activity questionnaires (diet and physical activity), and psychometric assessment (self-esteem and subjective well-being). During the clubs, only anthropometric measurements took place, to ensure the children got the most out of sessions focused on reward and recognition and on celebrating success regardless of the magnitude of the health improvement. This phase was designed to provide children and parents with the tools, know-how, and confidence to carry on with the new healthy lifestyle at home as a means to ensure durable weight management.
Data show that of the 941 children in intervention schools, 430 children, or 45.7%, were either overweight or obese, having a BMI in the top 95th centile for age. This rate is higher than the 42% we observed in a pilot study conducted by our team in 2014 and the 40% prevalence of overweight and obesity among children reported by other studies.
A total of 100 children aged 9–12 completed the camp with a significant reduction in percent BMI SDS of 12.5% (p < .001). The average percent BMI SDS reduction was higher for girls than for boys (14% vs. 11%). This percent BMI SDS reduction is four times the minimum BMI SDS reduction (3%) required for health benefits in adolescents. The camp also resulted in a significant improvement in self-esteem (p < .001), with girls edging out boys in terms of improvement. A slight but non-significant improvement in subjective wellbeing was also observed between the start and end of camp (p = 0.128).
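As an aside on the metric itself: the abstract does not state how BMI SDS was derived, but a common route is the LMS method, in which a raw BMI is converted to a z-score against age- and sex-specific reference values. The sketch below is a minimal illustration of that calculation and of a percent reduction; the reference values and example BMIs are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of a percent BMI SDS calculation, assuming the standard
# LMS method (z = ((BMI/M)^L - 1) / (L*S)). The reference values and
# example BMIs below are hypothetical placeholders, not study data.
from math import log

def bmi_sds(bmi: float, L: float, M: float, S: float) -> float:
    """Convert a raw BMI into a standard deviation score (z-score)."""
    if L == 0:
        return log(bmi / M) / S  # limiting case of the LMS formula as L -> 0
    return ((bmi / M) ** L - 1.0) / (L * S)

# Hypothetical LMS reference values for one age/sex group (placeholders)
L_ref, M_ref, S_ref = -2.2, 17.5, 0.13

sds_pre = bmi_sds(26.0, L_ref, M_ref, S_ref)    # camp entry
sds_post = bmi_sds(24.8, L_ref, M_ref, S_ref)   # camp exit

pct_reduction = 100.0 * (sds_pre - sds_post) / sds_pre
print(f"BMI SDS {sds_pre:.2f} -> {sds_post:.2f}: {pct_reduction:.1f}% reduction")
```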
These improvements in percent BMI SDS reduction (weight loss) and self-esteem occurred in a group that reported an unhealthy lifestyle profile with respect to physical activity and diet. In fact, participant responses painted a profile characterized by little or no physical activity (1 to 2 times/week), with two-thirds of participants reporting fewer than 3 occasions of physical activity in the previous week. Participants' diet was characterized by low intake of fruits and vegetables and high intake of calorie-dense foods including sweets, soft drinks, and fast foods. Girls reported eating more fruit than boys but seemed to indulge more frequently in sweets.
The camp phase resulted in significant weight loss among all participants (100% of participants lost weight, to varying degrees), particularly girls, who were more committed in their participation; the clubs were then found to help participants with weight management. After an initial weight gain during the 3-week period between the camp and club phases (percent BMI SDS reduction down to 10%), participants were able to recover and maintain their post-camp levels of BMI SDS reduction. Correlation analysis suggests that the more clubs participants (particularly boys) attended, the more likely they were to lose weight during the club phase (p = .028).
In summary, the intervention camp was effective in significantly reducing the weight of all participants, despite its short duration of 11 days. The after-school clubs showed effectiveness in maintaining or further enhancing the weight loss achieved in the camp and in engaging parents. The synergistic effect of the camp and the after-school/community clubs suggests promising potential for successful incorporation of this integrated intervention into the school curriculum, especially since the camp occurs during the mid-year school break and the after-school clubs during school days. Succeeding cohorts will provide further data to validate this potential. One-year follow-up data are being collected to assess the durability of the weight changes and the persistence of the behavioral changes induced by the different phases of this intervention.
-
-
-
New Light on Mamluk Cartouches and Blazons Displayed in the Museum of Islamic Art, Doha: An Art Historical Study
The Mamluk dynasty ruled in Egypt and Syria from the overthrow of the Ayyubid dynasty in 1250 until the Ottoman conquest in 1517. The Mamluk sultanate developed a system of pictorial heraldic blazons and inscribed cartouches bearing the sultans' names along with mottos, epithets and blessings dedicated to them. One of the well-known pictorial blazons of Mamluk sultans is the panther in the act of walking, attributed to Sultan Baybars al-Bunduqdārī (r. 1260–1277). According to the chronicle of Ibn Iyās (d. 1522), “… Baybars attained the panther (sab‘) as emblem representing his equestrian and extreme power …”. He depicted it, for instance, in architecture and on objects made of various materials, as well as on his own coins. Two masterpieces attributed to Baybars are displayed in the Museum of Islamic Art (Doha). Another of the museum's objects is a bronze candlestick inscribed with an eagle, the personal emblem of the Mamluk sultan Muḥammad Ibn Qalā'ūn (who reigned three times: 1293–1294, 1299–1309, and 1309–1341). The eagle appears in two varieties – one-headed and two-headed – and features on one- or two-fielded shields, and at times even without a shield.
In addition to the pictorial emblems, Mamluk sultans also had their own inscribed shields (cartouches), depicted in an oval or circular form. The early shields of this type, which appeared during the early Mamluk period, were simple, consisting of three fields or horizontal stripes, of which the middle one bears the sultan's name. Such shields appear alone or beside emir emblems, as seen on a candlestick in the Museum of Islamic Art, Doha. Sultans' emblems developed during the Late Mamluk (Circassian) Burjī period (1382–1517) into three fields documenting the sultan's name and epithets accompanied by blessings, as mentioned below.
The Mamluk dynasty developed a heraldic science for emirs as well. Each prominent emir had his own blazon, mostly circular and decorated with the heraldic device reflecting his official post. Emirs inscribed these blazons on their buildings and also depicted them on every possible product made for them, such as vessels, tools and weapons.
The heraldic blazons of the early (baḥrī) Mamluks were simple and characterized by a circular, undivided shield depicting, for instance, the “pen-box” for the post of the sultan's executive secretary (dāwādār) and the “two polo sticks” representing the polo master (jūkandār). In a later period the blazon became divided into three fields (or bars), with the middle one occupied by the heraldic emblem of the emir while the other fields were left blank. In the Late (burjī) Mamluk period the emir's blazon developed into a composite form occupied by various emblems, beginning with the emir's earliest official post in the lower field (bar) and ending with his last.
In light of his ongoing micro art-historical study of Mamluk masterpieces displayed in the Museum of Islamic Art, Doha, the presenter presents the results of the recently examined cartouches of Mamluk sultans and heraldic blazons of high-ranking emirs, and discusses them in historical, art-historical and hierarchical contexts.
-
-
-
An Exploratory Study of Teachers' Perceptions of Prosocial Behaviors in Preschool Children
Authors: Yassir Semmar and Tamader Al-Thani

Children's social development is generally facilitated in the context of the unique socialization experiences that they encounter at school and at home. Such experiences are likely to manifest themselves in prosocial behaviors (e.g., helping, collaborating, and empathizing with peers) or aggressive behaviors (e.g., hitting, bullying, manipulating, rejecting, and teasing). Schools today are fraught with challenging behaviors that lead to stressful and difficult environments for students and teachers alike. Anecdotal evidence and empirical research point to a rise in violent and aggressive acts among school-age children. Anecdotal support from conversations with local school teachers, pre-service teachers' classroom observations, and round-table discussions with both faculty members and students about the rise of children's aggressive behavior in schools motivated our study. We believe that it is essential to conduct a study that would help us gain a working familiarity with the extent to which children's prosocial behaviors are present in the preschool classroom. This is critical because students who exhibit antisocial behaviors face an even greater challenge in achieving social competence and academic success, as continual conflict is likely to invade their thought processes and disturb their ability to learn. Therefore, the purpose of this study was to assess the occurrence of prosocial behaviors in preschool children according to the perceptions of their teachers, examine whether prosocial behaviors vary between boys and girls, and analyze whether they vary between children in Kindergarten 1 and Kindergarten 2.
Thirty teachers from different preschool centers in the community participated in the study. They provided information about their perceptions of the prosocial behavior of each child in their classes. The instrument used in this study was the Prosocial Behaviors of Children Questionnaire, which consists of 19 items. Four subscale scores are calculated by adding individual items: the Prosocial Behavior and Social Competence subscale, the School Adjustment subscale, the Peer Preferred Behavior subscale, and the Teacher Preferred Behavior subscale; a scoring sketch follows below. A high score on any of the four subscales denotes a great amount of prosocial behavior. Teachers were asked to indicate how frequently they observed specific prosocial behaviors in the children of their class, using a 5-point Likert scale (never, rarely, sometimes, often, and frequently). The questionnaire was translated into Arabic and then back-translated to English. The final Arabic version was piloted in the Early Childhood Center at Qatar University with the participation of four preschool teachers. The first part of the study relied on having the teachers complete the questionnaire. This type of self-report measure, based on teachers' observations of and interactions with their students, is commonly used in early childhood education research. The second phase of this investigation employed a causal-comparative design in which the researchers tested whether children's prosocial behaviors are related to gender and age. Causal-comparative methods aim at investigating whether one or more preexisting conditions have possibly caused subsequent differences in the groups of participants. The causal-comparative approach also has the advantage of establishing relationships that might be studied experimentally at later points in time.
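The scoring procedure described above (summing Likert items into four subscales) is mechanical; a minimal sketch of it follows. The item-to-subscale assignment is a hypothetical placeholder, since the questionnaire's actual scoring key is not given in the abstract.

```python
# Minimal sketch of the subscale scoring described above: four subscale
# scores obtained by summing individual Likert items (1 = never ... 5 =
# frequently). The item-to-subscale assignment below is a hypothetical
# placeholder, not the questionnaire's actual key.
SUBSCALES = {
    "prosocial_social_competence": [1, 2, 3, 4, 5, 6],
    "school_adjustment":           [7, 8, 9, 10],
    "peer_preferred":              [11, 12, 13, 14],
    "teacher_preferred":           [15, 16, 17, 18, 19],
}

def score(responses: dict[int, int]) -> dict[str, int]:
    """Sum one child's item responses into the four subscale scores."""
    return {name: sum(responses[i] for i in items)
            for name, items in SUBSCALES.items()}

# Example: one child's ratings on all 19 items (illustrative values only)
child = {i: 3 for i in range(1, 20)}
print(score(child))  # higher totals denote more prosocial behavior
```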
-
-
-
Trust in Business-Customer Mutuality Relations as a Model of Social Engineering
The project of social trust
Social trust, Arrow remarked, is a social lubricant. The business world requires financial efficiency and expediency for its decisions to be realized. The relationship between trust and the efficiency and expediency of business decisions impinges on profitability and sustainability, resulting in a two-way causality: the firm benefits from the trust of its customers, and customers earn the trust of the firm through its provision of desired goods, services, and trustworthy relations. An example of such trust between business and customers is the customer services department of most large businesses. In small firms and businesses there is no such organized department, but respect for customer complaints and earnestness in providing reliable products and services to customers prevail in conscious business ethics, although this is left to an understanding rather than a formal and legal code.
Objective
In this paper we formalize the inter-causal relations based on business and customer trust that lead to mutual wellbeing. The objective criterion of the wellbeing function thereby replaces the profit-maximization or expected-utility-maximization criterion of the financial firm with the project of social consciousness in the interactive business-customer environment. This replacement is not an imposed criterion on business; it is a logical and ethical one. An example here is the trust that society places in environmental goods and services, their suppliers, and business-customer relations. Customers and lobby groups campaign for green and sustainable futures. Corporations prevail on profit and output maximization and risk diversification. By contrast, the objective criterion of small and medium-sized firms and microenterprises rests upon the consciousness of trust rather than a costly customer service department. Small firms are not subject to corporate governance and corporate social responsibility strictures; their response in these directions is inherent in the type of business they do. Thus social trust between business and customers, with its dynamic ethical effects on preferences and objective functions, is based on a process of learning within the complementary outlook of business and customers on the basis of mutual trust.
Methodological formalism
The formalism of complementary relations in mutual trust is explained by circular causation relations. This kind of methodology can be represented by the following type of schema, which will be formalized and evaluated to bring out the empirical validity of the ensuing decision making premised on mutual social trust. Note that circular causation implies the need for the sustainability of ethical values, as through the impact of trust, and for this to be sustained by its reproduction in business-customer relations. Thereby the project of ethical consciousness takes functional root in preferences, conduct, and interactive relations. A brief explanation of the circular causation method is provided by Fig. 1; it is to be developed in detail in the full paper for oral presentation.
Significance of the proposition
The global significance of the proposed paper is in its development of an extensive and transcultural organizational behaviour based on the ethics of consciousness. This is studied in an institutional and academic framework. Students and scholars in every discipline need to develop the theory of endogenous business and social ethics that is presented in the paper. Practitioners and public authorities need to realize the inherent ethics of quality and stability in the goods, services, durability, sustainability and prices that they offer with empathy, beyond the old understanding of the demand and supply mechanism. Such social reconstruction generates and sustains good ethical governance between multiplicities of transacting agents, and transforms society at large. Good conscious corporate social responsibility prevails in the measured implications of wellbeing as the social index. In a world of financial and social uncertainties, what is appropriate today is not so much a policed and regulated social environment; rather, what is much needed is the development and sustainability of conscious ethical conduct through human and institutional interactive conscious preferences. This will be the academic and practical message of this paper, with its empirical and organizational content.
Figure 1: Circular Causation by Complementary Relations of Mutual Trust. Trust: θ ∈ (Ω, S) → B(θ, x(θ)) → C(θ, y(θ)) → SOC(θ, z(θ)) → θ (recursive).
It is evident from Fig. 1 that the vectors induced by the trust-value θ, (x, y, z)[θ], generate a complex system of relations if their interdependencies were charted by any systems method. The circular causation model is therefore used instead; this approach simplifies the complexity of the systems method. Details will be shown in the paper for academic and practical guidance. The vector (x, y, z)[θ] will be given types of variables belonging to (B, C, SOC) respectively, and their interactions defined. The set (Ω, S) forms a topology on the universal set of ethical values Ω by the functional ontological mapping S. The objective function to simulate is stated in terms of the wellbeing function: Simulate W = W(x, y, z)[θ], subject to the circular causation (double arrows in Fig. 1) relations between the variables, in terms of their explained attributes of ethicality underlying trust, signified by the value of complementary mutuality. Two analyses are then launched and explained. Firstly, the above simulation problem is explained by replacing the idea of risk and return with the spreading of risk diversification over social goods and services and an increasing number of stakeholders, as in the development of SMEs and microenterprises to increase mutual wellbeing by trust. Trust is thus a conscious cementing (complementary) value of mutuality of interest and satisfaction, spanning the vectors (x, y, z)[θ] through circular causation as the trust factor. Secondly, a non-parametric spatial domain analysis is displayed to further bring out the simulation results of social trust generating, and being generated recursively by, the inter-causal variables of the vector (x, y, z)[θ]. The circular causation and simulation perspective of the wellbeing function are thus brought together by epistemic economic explanation and its application in visual form. The methodology will be a combination of ethical dynamics and financial economics and analysis. The paper is expected to be approximately 3000 words long.
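For concreteness, the simulation problem sketched above can be written out as follows. This is a minimal rendering using the paper's own symbols (θ, Ω, S, B, C, SOC); the functional skeleton and the log-linear estimable form are illustrative assumptions, since the abstract leaves the functional forms unspecified.

```latex
% Minimal rendering of the wellbeing simulation with circular causation.
% The log-linear form of the causation equations is an illustrative
% assumption; the paper itself leaves the functional forms unspecified.
\[
\text{Simulate}\; W = W(x, y, z)[\theta], \qquad \theta \in (\Omega, S),
\]
subject to the circular-causation relations
\[
x = f_1(y, z)[\theta], \qquad
y = f_2(x, z)[\theta], \qquad
z = f_3(x, y)[\theta], \qquad
\theta = f_4(x, y, z),
\]
each of which may, for example, be estimated in log-linear form:
\[
\ln x_t = a_0 + a_1 \ln y_t + a_2 \ln z_t + a_3 \ln \theta_t + \varepsilon_t .
\]
```

The recursion through θ is what makes the causation "circular": each estimated variable feeds back into the trust-value that induces the others.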
Principal references for this abstract
Choudhury, M.A. & Hossain, M.S. 2007. Computing Reality. Tokyo: Aoishima Research Institute.
Hammond, P.J. 1989. “On reconciling Arrow's theory of social choice with Harsanyi's Fundamental Utilitarianism”, in G.R. Feiwel (Ed.), Arrow and the Foundation of the Theory of Economic Policy, pp. 179–221. London: Macmillan.
Kim, W.C. & Mauborgne, R. 2005. “Creating blue oceans”, in their Blue Ocean Strategy, pp. 3–22. Boston, MA: Harvard Business School Press.
Lawson, A. 1997. Economics & Reality. London: Routledge.
Stiglitz, J., Sen, A. & Fitoussi, J-P. 2010. Measuring Our Lives: the report of the Commission on the Measurement of Economic Performance and Social Progress. New York: The New Press.
-
-
-
Assessment of Student Learning Outcomes for Assurance of Learning at Qatar University
Authors: Khaled A Daoud and Shaikha Jabor Al-Thani

From 2006 to 2012, Qatar University transitioned from doing no program-level outcomes-based assessment of student learning to implementing a robust, effective, and institutionally pervasive Student Learning Outcomes Assessment System (SLOAS) characterized by a high level of compliance and meaningful improvements to both learning and assessment processes. Keys to the success of the implementation have been support from campus leadership, creation of a structure and processes that support assessment at all levels, and an intensive program of faculty development and faculty incentives. A unique feature of the system is the auditing of annual program assessment reports by external experts. Comparison of results from the fourth and fifth years of the implementation suggests the following trends: a relatively high and increasing tendency to identify learning improvements involving revisions of curriculum and courses, a low and decreasing tendency to identify learning improvements that cost money, and a high and increasing tendency to make changes to assessment processes that make them more meaningful and more manageable.
-
-
-
Death Penalty between Divine Law and Secular Law: Egyptian Criminal Justice System and Counter-Terrorism Law, Quo Vadis?
Muslims, Christians, and Jews are advocates of the theological concept of reconciliation; in contemporary legal and political perspectives, it encompasses acknowledgment (truth commissions, memorials…), compensation, apology, and occasionally retribution (punishment), via a unique restorative logic, to heal wounds and dissolve hatred. Islam plays a crucial role in law and politics in the Middle East, as it provides the Islamic legal basis for the application of criminal punishment, especially the death penalty; as the Prophet Mohammad said: “[I]f a relative of anyone is killed, or if he suffers khabl (wound), he may choose one of three things: he may retaliate, or forgive, or receive compensation.” Justice is a dominant theme in the Qur'an, as it represents one of Islam's main purposes. In terms of retributive justice, scholars of Muslim fiqh (jurisprudence) divide crimes and punishments into three categories: hudud are prescribed offences covering specific acts (e.g., theft, adultery, slander…); qisas means retaliation for murder, wounding, and mutilation, for the community's improvement; and ta‘zir includes minor misbehaviors, crimes for which retribution is improper (or impossible), and offences not cited in the Qur'an that carry no fixed penalties as hudud and qisas do, being administered instead at the discretion of the qadi (judge). Under the Egyptian criminal justice system, and according to the Egyptian Criminal Code, the country's attorney general, along with the defendant, has the option to appeal death penalties to the Supreme Court, which can order a retrial; if the retrial results in the same ruling, the defense attorney may again ask the court to grant a retrial. According to Article 2 of the Egyptian Constitution of 2014, “Islam is the State's religion…and the principles of the Sharie‘a are the principal source of legislation.” Under one interpretation of this provision, the law of God requires that intentional and serious criminals be put to death, reflecting the lex talionis (principle of equality): the victims' feelings are satisfied and social peace thereby maintained. Classical Islamic scholars argued that Islamic norms are immutable; however, in its decisions interpreting Sharie‘a values, the Supreme Constitutional Court holds that Sharie‘a law includes “relative” philosophies and “updated or modern” doctrines capable of being adjusted to future social development through ijtihad (individual reasoning) and qiyas (analogy), without contradicting the main maqasid (objectives) of Islamic jurisprudence. In this domain, the most conventional religious jurists go so far as to claim the retention of the death penalty for all crimes specified in the Qur'an, while more moderate Islamic intellectuals argue for the restoration of the diyya, whereby criminals can be pardoned by the victim's family in exchange for compensation. Egypt's Constitution stipulates that all those accused of a criminal offense are “presumed innocent until proven guilty in a fair legal trial in which the right to defend oneself is guaranteed.” The Constitution does not refer to corporal punishment but confirms a certain number of guarantees concerning respect for individual public rights and freedoms. The Penal Code sets this punishment for various crimes.
Crimes carrying this punishment are tried by the criminal circuits of the appellate courts, whose procedural rules do not offer a fair system of reasonable administration of justice; this constitutes a breach of the UN safeguards guaranteeing protection of the rights of those facing the death sentence. The penal law obliges the court to pass the case file to the Mufti (religious leader) for his opinion before pronouncing the death sentence, to make sure it is compatible with the rules of Islamic law. In Egyptian law, execution can be postponed by a request for retrial, as the right to demand a retrial belongs to both the prosecution and the defendant. On the question of whether the Sharie‘a permits abolition of the death penalty: based on the moderate constitutional interpretation of Islamic norms, and since Islam should accommodate the changes which have come about since the Prophet's period, adherence to the law of talion is an outdated practice which should be set aside by the legislature and the judiciary, to end the debate on the death penalty not only in Egypt but also in the Islamic world. For decades, reprisal has no longer constituted the basis for punishment, as legal development appears to accelerate the law's secularization, the purpose of which is to separate the Prince's law from God's law. Regrettably, the rise in various forms of fundamentalism is not favorable to this development.
-
-
-
Higher Education in Pakistan - Problems and Prospects after the 18th Amendment
Authors: Amber Osman and Muhammad Imtiaz Subhani

This multi-dimensional study focuses on investigating the evolution of the constitutional position enjoyed by higher education, while identifying the issues regarding the planning and governance of higher education in the post-18th-amendment scenario. The study also emphasizes the governance of higher education at the provincial level. Moreover, it gives recommendations for a future course of action in connection with the position of the subject of education in the constitution, by quantifying the following propositions: (1) there is an evolution of the constitutional position enjoyed by the subject of higher education; (2) there are issues with regard to the planning and governance of higher education after the 18th amendment; (3) there is movement in the governance of higher education at the provincial level; (4) the future of the administration of higher education at the provincial level is predictable; (5) there is a need for planning and monitoring of higher education at the national level; (6) there is a need for a body with advocacy powers at the federal level to interact with international donors for educational funding; (7) there is a need for a body with advocacy power to meet the requirements of educational institutions abroad seeking verification and equivalence of educational qualifications; (8) there is a bubble in access to higher education in Pakistan; (9) there is gender discrimination in access to higher education in Pakistan; (10) there is no shock in the spending on higher education in Pakistan; (11) the universities of Pakistan contribute to generating income for higher education and the economy; (12) the universities contribute to generating foreign exchange earnings; (13) the performance of public and private sector universities in the post-18th-amendment period can be forecast in terms of numbers of enrollments, contribution to national income, and contribution to foreign exchange earnings; and (14) there is variance between the perceptions of public sector and private sector Vice Chancellors in connection with the issues raised by the 18th constitutional amendment.
Pakistan's literacy rate is below average, which reflects the state of the country's education system. The education system of Pakistan requires a complete overhaul, from its roots to the top, in order to provide genuinely quality education. Higher education is generally expensive for the masses, more so for those sending their children to private universities. The public and private sector universities do provide scholarships and financial aid, but the present situation of higher education still requires massive improvement. Nor is the curriculum in public and private sector universities uniform; even when a master's student tries to attain a foreign degree, there are obstacles in the form of bridge courses to complete before starting the actual degree applied for. All these matters load the individual with a monetary burden and in turn affect growth performance as a nation. With the 18th amendment in place, the government and the universities are trying to take steps to provide quality higher education in Pakistan, but the questions of mass access to higher education and of greater awareness of education-system policies remain unanswered. Pakistan has already decided on free and compulsory education for children between 5 and 16 years in the 18th constitutional amendment; still, many children are not being educated, and the education system remains disturbingly poor. The current public and private higher education system needs to produce strong youth who can compete in the growing international market, with a supportive local government system within the country and tranquility at the international level. Actions speak louder than words; this is what Pakistan should heed. It is necessary for Pakistan to make substantive investments in higher education, training, and curriculum to meet national needs. This will empower the country to produce well-rounded citizens ready to support its economic, political, social, technological, and defense needs, and partnerships for self-defense, while safeguarding its religious and cultural heritage and geographical identity. The underlying problems to be explored under this research grant are: (1) gauging the evolution of the constitutional position enjoyed by the subject of higher education; (2) identifying the issues with regard to planning and governance of higher education after the 18th amendment; (3) (a) analyzing the governance of higher education at the provincial level and (b) forecasting the future of the administration of higher education at the provincial level; (4) identifying the need for planning and monitoring of higher education at the national level; (5) identifying the need for a body with advocacy powers at the federal level to interact with international donors for educational funding; (6) identifying the need for a body with advocacy power to meet the requirements of educational institutions abroad seeking verification and equivalence of educational qualifications; (7) access to higher education; (8) gender discrimination; (9) spending on higher education; (10) sources of generating additional funding; (11) the potential of higher education as a foreign exchange earner; (12) the public and private sectors in higher education and their potential; and (13) a survey of Vice Chancellors, both public and private, in connection with the issues raised by the 18th Amendment.
The foremost expected benefit of this research sub-theme is that it will help one understand the evolution of the constitutional position enjoyed by the subject of education, with particular reference to higher education, and identify the issues which have come up with regard to the planning and governance of higher education after the 18th Amendment. It will also explore the provincial-level role in higher education, as well as the need for a planning and monitoring unit at the federal level to interact with international donors in order to facilitate and meet the necessities of educational institutions abroad seeking verification and equivalence of educational qualifications. The major benefit and impact will be establishing the future implications of the changes brought about or proposed by the provincial governments in relation to higher education administration, and foreseeing the plans, directions and outcomes at the federal and provincial levels, which might entail additional modifications to the subject of education in the Constitution. This research study shall be beneficial to higher education policy makers, public and private higher education institutions, faculty and students of higher education, researchers, and all those who are keen to acquaint themselves with the post-18th-amendment outcomes relevant to the higher education of Pakistan. The positive and/or negative impacts, via various indicators of higher education in Pakistan, will be forecast through econometric techniques to present the post-18th-amendment situation and the future of higher education.
-
-
-
Degree of Sustainability Disclosure and its Impact on Performance of Islamic and Conventional Banks
Authors: Haitham Nobanee and Nejla Ellili

This paper examines the degree of sustainability disclosure and its impact on the profitability of banks listed on the UAE financial markets during the period 2003–2013. The results show that the levels of sustainability, economic, environmental, and social disclosure are low for all UAE banks, both Islamic and conventional. The results show significant differences in social disclosure between Islamic and conventional banks, and insignificant differences in sustainability, economic, and environmental disclosure between the two banking systems. In addition, the results of the dynamic panel data analysis reveal that sustainability, economic, environmental, and social disclosures have no significant effect on the banking performance of UAE banks, whether conventional or Islamic.
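The abstract does not spell out the dynamic panel specification; a generic formulation of the kind typically estimated in such disclosure-performance studies would look as follows, where the variable names are illustrative placeholders rather than the authors' exact model.

```latex
% Generic dynamic panel specification; variable names are placeholders,
% not the authors' exact model.
\[
\mathit{Perf}_{it} = \alpha
  + \rho\,\mathit{Perf}_{i,t-1}
  + \beta_1\,\mathit{SusDisc}_{it}
  + \beta_2\,\mathit{EconDisc}_{it}
  + \beta_3\,\mathit{EnvDisc}_{it}
  + \beta_4\,\mathit{SocDisc}_{it}
  + \mu_i + \varepsilon_{it}
\]
```

Here $\mu_i$ is a bank-specific effect, and the lagged dependent variable $\mathit{Perf}_{i,t-1}$ is what makes the panel dynamic; it is usually instrumented (e.g., with Arellano-Bond style GMM) to avoid the bias that plain OLS would introduce.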
-
-
-
“Option Contracts” in the Light of Islamic Jurisprudence: Comparative Study
This research aims to provide an Islamic fiqhi perspective on the options contract, one of the most prominent contracts in international capital markets. The research begins with a historical introduction that illustrates the emergence of this contract and its evolution; it then moves on to explain its terminology and linguistic meanings. The research also addresses the most suitable fiqhi adaptation (takyeef) of this contract. The research concludes with several significant findings. First, the subject of the contract is an abstract right, namely the bare right to purchase or to sell, and nothing else; it is not an obligation, because an obligation is only a consequence of this contract. Finally, all kinds of this contract are haram (forbidden) because they contain gharar (excessive uncertainty) and maysir (gambling).
-
-
-
The Factorial Structure of the Relationship between Educational-Institution Management Competencies and the Quality of School Performance: A Study of the Elements of Developing Management Competencies using Structural Equation Modeling (SEM)
The study addressed the development of management competencies in the contemporary Algerian school, given that the administrative human resources of any institution are of paramount importance; the development of management and leadership competencies is therefore a necessary subject for study and analysis in search of the means to achieve it. Since the school, at all its levels, represents the backbone of the educational system, its administration and management constitute a production process of great importance that calls for developing the competencies of those who run it. Given the role of the educational institution in contemporary societal development, those in charge of it are required to possess the competencies needed to develop knowledge, understand the changing circumstances of the age around them, and interact positively with the requirements of achieving the goals set. This underlines the importance of studying and reconsidering systems of training and professional formation, so as to identify the elements that support a training process enabling managers to face the challenges of continuity and development amid cultural, social, economic, and environmental change, and thereby to ensure the success of the school's outputs and the quality of its performance. From this standpoint, the study set out to survey the reality of the management of educational institutions, with the aim of establishing the elements necessary for effective training that contributes to the professional development of the administrative apparatus, in a way that keeps pace with reforms and responds to the contemporary requirements of managing the modern school in accordance with its cultural environment on the one hand, and in line with total quality management on the other. The study population consisted of principals of educational institutions at the three levels, with a sample of 315 principals distributed across primary, middle, and secondary levels during the 2014/2015 school year. After processing with the appropriate statistical tools and the structural equation modeling (SEM) methodology, the results identified the structural theoretical model comprising the set of elements required to develop the competencies of the administrative apparatus in managing a school that is contemporary with its society and effective in its environment, and then proposed a matrix for deriving the competencies targeted by this development. Keywords: competency development, elements, effective management, local culture, quality of school performance.
Title: The elements of skills development for the management of education and training institutions, in the light of the local culture and the norms of TQM (a structural equation modeling study)
Summary: The study focused on the development of skills related to the management of educational institutions, since the administrative and human resources of any institution are among the factors most influential on its development and the achievement of its objectives. Research on management skills is therefore more than necessary to enable educational institutions to achieve their goals. If we accept that contemporary schools, at the different stages of education, represent the core of the educational system, then their efficient administrative management is conditioned by the acquisition of a number of skills necessary to perform the roles defined by the requirements of modern times. That is why a continual re-examination of training strategies is imperative, given the continuing evolution in the field of administrative management of educational institutions. For this reason, the current study focused on observing the realities of the administrative management of these establishments, aiming to determine the elements of successful and effective training for the managers and directors of these educational institutions, and to ensure the practice of a modern management that takes into account the elements of the authentic cultural environment surrounding these establishments, in addition to ensuring the spirit of competition that leads to the overall quality covered by this management. The population concerned by this study is composed of directors of educational institutions; the sample includes 315 directors at three levels (primary, middle and secondary) for the 2014/2015 school year. The study was able to determine a set of components necessary for acquiring the skills related to the administrative management of the modern school, and also to propose a training system capable of achieving the goals of sustainable social development.
Keywords
Skills, Skills Development, Constituent Elements, Management of Educational Institutions, Local Culture, TQM.
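For readers unfamiliar with the method named in the title, the model family being estimated has a standard form. The sketch below uses conventional LISREL-style SEM notation; mapping the study's constructs onto the symbols (management competencies as the exogenous ξ, quality of school performance as the endogenous η) is our reading of the abstract rather than the authors' stated equations.

```latex
% Standard SEM form in LISREL-style notation. Mapping the study's
% constructs onto xi (management competencies) and eta (school
% performance quality) is an assumption drawn from the abstract.
\[
\eta = B\eta + \Gamma\xi + \zeta \qquad \text{(structural model)}
\]
\[
y = \Lambda_y \eta + \varepsilon, \qquad
x = \Lambda_x \xi + \delta \qquad \text{(measurement models)}
\]
```

Model fit is then judged by how closely the model-implied covariance matrix reproduces the sample covariance matrix computed from the 315 principals' responses.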
-
-
-
Road Traffic Accidents in Kuwait (Triangulation Method Study)
The study reveals the pattern and trends of motor traffic accidents in Kuwait City from 2010 to 2011. It shows that accident occurrence was increasing every year; that passengers and pedestrians are always at the highest risk of being injured or killed on the road; and that young males are highly prone to motor traffic accidents. The study also identified qualitatively (through interviews) the technical elements of highway construction, irresponsibility, poor management, cell phones, alcohol and drugs, the age of the victims, and the poor condition of services as the important risk factors associated with the causes of traffic accidents in Kuwait. In order to reduce traffic accidents in Kuwait City, it is recommended that the government review its legislation regarding the employment of drivers. The working conditions of the police force should be improved, public road safety campaigns should be conducted, and a new driving license system should be introduced. The use of cell phones while driving should be restricted. Hospital and police record keeping should be strengthened; hospital staff, traffic police and ambulance personnel should be considered for intensive training in emergency preparedness; and regular vehicle inspection should be introduced in Kuwait City.
-
-
-
Translating Conversational Implicature from English into Arabic
Conversational implicature is an additional meaning indirectly conveyed by saying something else. In this sense, the aim of this thesis is to discuss the problems of translating conversational implicature from English into Arabic. It is concerned with conversations between characters selected from three English literary works, two of them novels written in prose, Lord of the Flies and Nineteen Eighty-Four, analysed along with their Arabic translations. In order to determine how to resolve the problems of translating conversational implicature from English into Arabic, two theoretical frameworks are employed for the descriptive analysis of the selected texts. The first is the Skopos approach, which concentrates on the purpose of the translation, which in turn determines the methods and strategies employed to produce a functional target text. The second is Grice's theory of implicature, under which participants implicitly agree on the “purpose or direction” of a conversation, each participant (speaker and listener) cooperating to achieve that purpose. These two theories, along with their rules, provide appropriate standards by which to measure the accuracy of such translations from English into Arabic. The study's descriptive analyses reveal that the translators encountered problems and obstacles in translating these texts into Arabic for several reasons: linguistic, social and cultural. To overcome these problems, the translators followed different approaches and techniques to achieve a consistent, coherent Arabic text equivalent to the original. Most of the source texts are translated into Arabic adequately on the whole, in spite of breaches of the rules and maxims of translation. In conclusion, the study reveals that both the Skopos and Grice theories are applicable, at varying levels, to translating conversational implicature from English into Arabic; nevertheless, Grice's approach proves more successful in translating the conversational implicatures within the framework of this study. Accordingly, this study answers all the questions it set out to address.
-
-
-
Digitally Reconstructing and Analysing Historic Villages in Qatar
This paper analyses the Computer-Aided Design (CAD) reconstruction of four abandoned villages in northwestern Qatar, Al Ghariya, Al Jumail, Al Khowir, and Al Areesh, using Building Information Modelling (BIM) techniques developed for contemporary architecture to visualise and understand how the villages functioned and were organised. In addition, we isolate specific buildings and analyse their environmental performance through light and shadow studies, thermal performance of the walls and interior spaces, and wind simulations. Through this interrogative process, we quantitatively explore the sustainability of traditional building practices and the underlying geometric logic of spatial organisation in historic Qatari architecture. This research is an extension of a reconstruction and analysis of Al Jumail I co-published in 2013 in the Proceedings of the Seminar for Arabian Studies. The additional village models allow for comments on common patterns and individualisation in village and house construction in Qatar prior to oil and gas development. I also identify the innovative ways in which Qatari people dealt with their environment prior to industrialisation that might be integrated into sensitive regional design today. The analyses focus on the ephemeral qualities of these villages to seek out deeper structures and meaning in the organisation of historic Qatari villages – particularly the inter-relationships between what and where people built and how they lived their everyday lives. Other elements, like the location and storage of water, and proximity to economic resources, such as inter-tidal fish traps, are related back to the BIM analyses and to primary and secondary qualitative sources on Islamic architecture and urban design. The mapping and reconstruction of each village is based on a combination of photographic documentation I completed in Qatar in 2013, GIS data from Google Earth, and existing AutoCAD plans of each village. I analyse the two-dimensional plans using DepthMap, a space syntax visualisation and analysis program. Through this, I map not only the buildings but also the primary and secondary arterial routes through each village, as well as the geometric relationships between buildings. This allows us to identify structures with higher or lower accessibility and to relate these to qualitative data on who lived where and did what within the village. It also measures the rate of penetrability of each structure, which sheds light on how architecture embodies Islamic concepts. One of the things that Besim Selim Hakim discusses in detail in his book, Arabic-Islamic Cities: Building and Planning Principles, is the emergence of planning and house models in conjunction with Islamic jurisprudence and the madhab, or schools of law. Professor Hakim used Tunis and its development in the 8th century according to 12 principles of Maliki law, much of which revolves around maintaining privacy while allowing public access through the urban core. In our first analysis of Jumayl, we discovered that the main public roads carry the highest level of public accessibility: they are clear, run through the town, and radiate out from the central public suq and the community mosque. Secondary roads away from the suq form the second order, with the high walls of the individual houses blocking visibility and accessibility into the private residential spaces. Access to the courtyard of an individual house represents the third order.
Access to the internal residence blocks (which also exhibit a subtle hierarchy of spatial access, as multiple smaller buildings can be arranged around the courtyard) is the final and most private order. The space syntax analysis software visualises this hierarchy of space and spatial permeability. In other words, there are four orders or levels in a continuum of privacy versus public accessibility that are constructed through the courtyard walls, avenues, and building structure and placement. At play are alternating lines of vision and occlusion that reinforce Islamic notions of gender, home, function and the organisation of public and private spaces. One of the main remaining questions I have, though, is how the topology - what might be described as the “ambient” land in which a building is placed - might impact some of these design decisions. I am also deeply interested in Harriet Nash's work on how stars were used to determine time and the distribution of water in Oman. What is particularly fascinating about Nash's work is her identification of a central nexus along the main falaj (or irrigation canal) from its source in the nearby mountains, and how this nexus has a specific sight line for stargazing built in. In other words, the arrangement of space that begins with all the elements described by Professor Hakim also includes consideration of other ephemeral qualities, including the relationship of the individual to the larger cosmos. I know from my other research in different parts of the Gulf that folk astronomy played a big role in weather prediction and the scheduling of economic events, including both maritime and terrestrial navigation. None of this has been documented for Qatar, so it would be interesting to include an astronomical simulation in the digital analysis to see if there are also star-gazing sites in these primarily fishing villages. I plan to import both the 2D plans and the terrain data into CAD software and then extrude the two-dimensional plans into three dimensions, matching the models carefully with my own database of geo-located photographs and site notes. The resulting models can then be subjected to a suite of BIM analyses that visualises how the buildings performed in different seasons, using recorded climatic data on heat, solar movement, wind, and tides. By correlating the results with pre-existing ethnographic data, the analysis illuminates the ways in which social hierarchy was materially and spatially expressed according to madhab. The results complement and expand upon the existing literature of Gulf architectural history, which has emphasised the use of passive cooling or visualised the typologies of individual houses, but rarely explores the range of these strategies within the context of the buildings' location and orientation. It also allows us to grasp the complexity and diversity of building and settlement typologies, and offers a set of methodologies applicable to the analysis of archaeologically recovered structures and towns. This is particularly relevant to Qatar, given that the historic built environment is not as comprehensively documented as in some of the other Gulf countries and that access to water on the peninsula was more limited.
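The mean-depth logic behind the DepthMap measures described above can be illustrated compactly. The sketch below, in Python with networkx, computes mean depth and relative asymmetry for a small access graph; the node names and adjacencies are invented for illustration and are not data from the four surveyed villages.

```python
# Minimal sketch of the mean-depth / integration logic behind space-syntax
# tools such as DepthMap, using networkx. The village graph below is a
# hypothetical illustration, not data from the four surveyed villages.
import networkx as nx

# Nodes are spaces; an edge means "directly accessible from".
G = nx.Graph()
G.add_edges_from([
    ("suq", "main_road"), ("mosque", "main_road"),
    ("main_road", "secondary_road"), ("secondary_road", "courtyard_A"),
    ("courtyard_A", "house_A1"), ("courtyard_A", "house_A2"),
])

k = G.number_of_nodes()
for node in G.nodes:
    depths = nx.shortest_path_length(G, source=node)  # steps to every space
    mean_depth = sum(depths.values()) / (k - 1)
    # Relative asymmetry: 0 = maximally integrated, 1 = maximally segregated
    ra = 2.0 * (mean_depth - 1.0) / (k - 2.0)
    print(f"{node:15s} mean depth {mean_depth:.2f}  RA {ra:.2f}")
```

Low relative asymmetry flags well-integrated public spaces such as the suq and the main road, while values near 1 flag the segregated private spaces at the end of the hierarchy, mirroring the four-order continuum of privacy versus public accessibility described above.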
-
-
-
The Cultural Innovation Sub-system
This is a paper in progress, dealing with the evolution of the concept of the National Innovation System (NIS) from the perspective of developing countries (particularly in the Arab region), keeping in mind the different aspects of entrepreneurship based on the contributions of Schumpeterian economics. It examines three possible focus areas for NISs: technological (the Research & Development/Science & Technology approach to innovation), non-technological (innovation methods other than product innovation), and cultural/creative industries. We argue that in the context of some developing countries with inadequate R&D infrastructures and economic development, the cultural-creative industries perspective can be more effective in strengthening innovation. We present three lenses that can help shed light on the contribution of cultural and creative industries to innovation in these contexts. This paper makes a number of propositions related to the effectiveness of R&D spending aimed at improving innovation in developing countries, and compares the impact of supporting innovation through cultural industries versus R&D in the same context. We then propose specific areas where investment in cultural industries can be expected to strengthen innovation. The view taken is argumentatively opposed to some entrenched beliefs about the value of technical innovation, R&D efforts, and registering patents, particularly within a context lacking suitable network, scientific, and industrial infrastructures. The theoretical propositions are not, as yet, supported by hard economic data, owing to the difficulty of capturing the construct and the lack of suitable statistics, but the work is in the process of being refined and adapted. Presenting the theoretical contributions at this conference offers a number of synergies, given the high value placed on cultural resources and innovation and the need for sustainable economic development. The argument will help spark a fruitful discussion on the propositions and refine the analysis related to building robust and powerful innovation systems in the region, supported by significant and solid cultural foundations.
-
-
-
Livability of High-Rise Districts - Case Study of West Bay in Doha
Doha, the capital of the state of Qatar, is a small Gulf city that grew from a port settlement based on pearling and fishing. Since the mid-seventies, when oil prices began to rise, Doha has undergone accelerated growth, and the city has witnessed a massive urban transformation from 2005 to the present day. Doha is scheduled to host the FIFA World Cup 2022; consequently, a number of significant projects and infrastructure works are being undertaken and will continue until the event's launch. Designers and planners usually focus on the design merits of tall buildings and their impact on the skyline and the city's image, disregarding the integration of the building with the ground level. In West Bay, tall buildings meet the ground level with security gates and parking spaces that weaken the buildings' approaches and diminish the vitality of the street. The distribution of land uses complicates accessibility. Insufficient parking and a lack of transportation choices exacerbate traffic congestion and reduce the number of visitors to the area. Additionally, West Bay's waterfront is allocated to embassies and goes unused, preventing the entire area from enjoying it. The current situation in the study area is the result of rapid urbanization, globalization, and a non-integrated urban planning process that pressured urban designers and planners to overlook the livability of urban spaces. Livability can be defined as a group of factors that together enhance the quality of life and the experience of urban spaces. When governments incorporate livability factors into the legislative framework and planning process of the city, the impact on human well-being is significant. Cities around the world that have integrated livability principles into their regulatory frameworks - San Francisco, Vancouver, Beijing, and many more - have succeeded in creating bustling, 24-hour high-rise districts. This research investigates the livability of high-rise districts, focusing on the West Bay of Doha as a case study. The study explores the implementation of livability principles through both urban legislation and the urban planning process for high-rise districts in the existing literature. It analyzes a series of case studies from Europe, North America, Asia, and the Gulf that are considered best practices of livability; the case studies cover all aspects of the research problem and propose solutions and strategies for more livable urban spaces. The comparative analysis of the case studies produces solutions that have been adapted to address the livability problems of West Bay. The site analysis is conducted using data collection tools to: 1. investigate the study area; 2. identify the absence or presence of livability indicators; 3. assess the problems caused by the tall buildings' interface with street level; 4. identify the government's plans for developing the study area; 5. explore people's general perceptions and knowledge of livability. A walk-through, social media discussions, interviews, and focus groups were undertaken to formulate an in-depth investigation of both the problem and the proposed approach. The study develops and proposes an approach to solve the problem of livability in the West Bay high-rise district and in future high-rise developments in Doha.
This approach includes a legislative framework for high-rise districts that adopts livability principles within an integrated urban planning process, using form-based codes and 3D visualizations, which will ultimately contribute to human well-being and to the overall sustainability and welfare of Qatar. The research investigates the livability of high-rise districts, focusing on West Bay's tall buildings as a case study. Tall buildings in West Bay meet the street level with security gates and parking lots that affect both the accessibility and the approach to the buildings. Insufficient parking, along with the lack of public transportation choices, frustrates people and exacerbates traffic congestion in the study area. The lack of services and amenities within the residential towers, together with poor pedestrian circulation, makes it hard to perform everyday activities. The current situation is the result of uncontrolled globalization and rapid urbanization, which established the high-rise building typology as a prerequisite for the further development of the country. The development of West Bay focused on the design qualities of tall buildings while ignoring their integration at the street level, resulting in a public realm that does not support people's daily activities and needs. To solve the problem, the research suggests a framework of regulations that adopts livability principles within an integrated urban planning process, and a shift from conventional codes to form-based codes. The following hypotheses are derived from the research problem. Main problem: livability of high-rise districts.
Hypothesis 1: The need for a regulatory framework that adopts livability principles.
Hypothesis 2: The need for an integrated urban planning process that adopts livability principles.
Hypothesis 3: The need to shift from conventional codes to form-based codes.
Tall buildings in West Bay of Doha are designed to meet the ground level with security gates and parking lots that affect the livability of the area. To investigate the problem and test these hypotheses, a literature review was conducted in four main subject areas:
1. Livability: generally exploring livability definitions and principles and tackling more specific issues, such as livability in high-rise districts, the integration of livability principles with regulations, and an integrated urban planning process (UPP) that fosters livability principles for high-rise developments.
2. Tall Buildings: investigating advantages and disadvantages of tall buildings, impact on surrounding urban space and how to overcome and mitigate the negative impact through regulations.
3. Integrated Urban Planning Process: exploring its definitions, components and benefits.
4. Form-Based Codes: identifying their importance, their differences from conventional codes, and their effect on the quality of the public realm.
The literature review revealed the need to propose a legislative framework that fosters livability principles. This framework requires two essential components: an integrated urban planning process and a contemporary type of urban design and planning code. First, an integrated urban planning process that includes all possible stakeholders, along with community engagement, in a complex and collaborative communication process. Second, a contemporary type of code that is presented in both text and illustrations and uses building form as its main organizing element. Form-based codes are one example, and they have proved capable of visualizing the resulting space before it is built. On this basis, the research analyzed eight case studies of best practice from Europe, North America, Asia, and the Gulf to cover the different aspects of the problem.
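As a rough illustration of how the presence/absence livability indicators from the site analysis could be aggregated into a district score, here is a toy scoring sketch; the indicator names, weights, and West Bay observations are hypothetical assumptions, not figures from the study.

```python
# Toy sketch: roll presence/absence livability indicators into one score.
# Indicators and weights are hypothetical, not taken from the study.
INDICATOR_WEIGHTS = {
    "active_street_frontage": 0.25,
    "public_transport_access": 0.25,
    "pedestrian_connectivity": 0.20,
    "mixed_land_use": 0.15,
    "accessible_waterfront": 0.15,
}

def livability_score(observations: dict[str, bool]) -> float:
    """Weighted share of indicators observed as present (0.0 to 1.0)."""
    return sum(w for name, w in INDICATOR_WEIGHTS.items()
               if observations.get(name, False))

# West Bay as described in the abstract: gated frontages, little transit,
# an inaccessible waterfront.
west_bay = {
    "active_street_frontage": False,
    "public_transport_access": False,
    "pedestrian_connectivity": False,
    "mixed_land_use": True,
    "accessible_waterfront": False,
}
print(livability_score(west_bay))  # 0.15
```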
-
-
-
Reasons for Participation in Household Surveys in the Arab Gulf Countries, Qatar
Authors: Elmogiera Elawad and Mohammed Bala Agied
Participation in public opinion surveys in the State of Qatar is voluntary, and respondents are not offered incentives for participating. Nevertheless, participation rates in face-to-face surveys conducted by the Social and Economic Survey Research Institute (SESRI) of Qatar University remain at levels that far exceed those observed in Western and even other Middle Eastern contexts. This study aims to understand why Qatari citizens participate in household surveys conducted in the country. Using data from a 2015 survey of 823 Qatari households, we examine the reasons underlying individuals' decisions about survey participation.
-
-
-
A Mathematical Model for Space Planning to Minimize Travel Distance
By Li Han
Introduction
The purpose of this research is to investigate how mathematical knowledge can be applied to space planning in design. To begin, it is useful to know how mathematics was used in the past to reach creative and logical design solutions. The design process is a multilayered and multifaceted investigation that aims to find an optimized solution to a defined problem. Since optimization is a mathematical term, it is reasonable to assume that many design problems can be solved using mathematical models. Mathematics and design were inseparable from the beginning. In ancient civilizations people built houses and temples facing north. But without a compass, how would one know which direction is true north? "North" was a decision: our ancestors decided to call north the direction to which the shortest shadow of the day points (Evans 1998, 28). It is difficult, however, to tell when the shadow is shortest. Our ancestors used simple tools, a stick and a string, to draw an arc on the ground. They marked where the shadow tip first intersected the arc and where it last intersected it, drew a line between the two points, and found its midpoint; the direction from the stick through that midpoint is north. What our ancestors really cared about was their relationship with the sun and their exposure to sunlight. Although it is very easy for a modern person to understand this ancient method, it is not easy to arrive at the solution independently if the answer has not been given. The orientation of a building is a design problem, and our ancestors solved it by simple mathematical deduction. A good designer cannot be ignorant of the various aspects of science related to design. For example, design decisions in space planning, such as the location of the fire exit, travel distance, and the orientation of the building, can be calculated using mathematical models, especially optimization problem solving. A typical calculus question illustrates how mathematics can be applied to design problem solving (Tsishchanka 2010): a farmer has 2400 ft of fencing and wants to fence off a rectangular field that borders a straight river, needing no fence along the river. What dimensions give the field the largest area? The mathematical model is: maximize A = xy subject to the constraint 2x + y = 2400.
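For completeness, the standard solution substitutes the constraint into the objective and differentiates:

\[ A(x) = x(2400 - 2x) = 2400x - 2x^2, \qquad A'(x) = 2400 - 4x = 0 \implies x = 600. \]

Hence \( y = 2400 - 2(600) = 1200 \), the maximum area is \( A = 600 \times 1200 = 720{,}000 \) sq ft, and \( A''(x) = -4 < 0 \) confirms the critical point is a maximum: the optimal field runs twice as long along the river as it is deep.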
Research
Nevertheless, many design problems are more sophisticated. This research conducts a case study of a space planning project for an extension of an existing outpatient hospital. The goal of the case study is to find mathematical models for optimizing (1) overall travel distances in one day and (2) the number and locations of exits. A patient's travel distance is defined as the total distance traveled by the patient inside the hospital from the entrance to the exit. The research began by looking at the importance of the center, the shape of a building, and the locations of entrances and exits. The objective was to understand how decisions on these design elements affect total travel distance, and then to identify applicable mathematical formulas to minimize travel distance over one day. In order to apply any mathematical formula, the designer first needs to establish the constraints, constants, and variables. In this research, the project requirements, such as numbers of visitors, numbers of rooms, functions of rooms, hierarchy of spaces, and adjacencies, are based on similar projects and are hypothetically established. There are two levels in this outpatient hospital. This research focuses on the ground level, where the designer needs to plan space to accommodate the Internal Medicine, Emergency, Radiology and Imaging, Pharmacy, and Intravenous Therapy departments. Preliminary research revealed that Emergency should have its own entrance close to the parking lot for easy access. Pharmacy, Radiology and Imaging, and Intravenous Therapy are functions shared by the other medical departments; notably, 90% of patients use the Pharmacy just before leaving the hospital, 10% of patients from Internal Medicine need an x-ray, CT, or MRI in the Imaging Department, and 35% of patients from Emergency need the Imaging Department. The question is where each department should be located to minimize travel. Existing site conditions, such as the location of the new entrance, the parking location, and city traffic, are arbitrarily decided to support the research, and the project requirements themselves have not been verified for optimization. These arbitrary decisions may not reflect true human needs, and they do affect the calculated results; however, they do not affect the validity of this research, which rests on the inherent mathematical logic. It is worth noting that the focus of this research is purely on functionality and on individual topics. The goal is to offer insight into how simple mathematical formulas can help solve design problems. These formulas can never offer a fully fledged creative design solution, because real problems are infinitely more sophisticated than a controlled scenario, especially where aesthetics are concerned.
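To illustrate the kind of optimization the case study describes, the sketch below brute-forces department placement along a single corridor so as to minimize expected daily walking distance, using the flow shares quoted above (10% of Internal Medicine and 35% of Emergency patients detour to Imaging; 90% of all patients call at the Pharmacy before leaving). The slot coordinates, visitor counts, and single shared entrance are hypothetical stand-ins for the paper's site assumptions, not values from the project.

```python
# Brute-force sketch: assign each department to one of five candidate slots
# so that expected daily walking distance is minimised. Layout and patient
# counts are hypothetical; flow percentages follow the abstract. IV Therapy
# flows are omitted for brevity, so its slot is whatever remains.
from itertools import permutations

DEPARTMENTS = ["internal_medicine", "emergency", "imaging", "pharmacy", "iv_therapy"]
SLOTS = [(0, 0), (20, 0), (40, 0), (60, 0), (80, 0)]  # metres along one corridor
ENTRANCE = EXIT = (40, -15)                            # shared door, mid-corridor

def path_len(points):
    # Sum of straight-line distances between consecutive stops.
    return sum(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
               for a, b in zip(points, points[1:]))

def daily_distance(pos):
    # (departments visited, expected daily patients): 10% of 200 internal
    # medicine and 35% of 100 emergency patients detour to imaging.
    routes = [(["internal_medicine"], 180), (["internal_medicine", "imaging"], 20),
              (["emergency"], 65), (["emergency", "imaging"], 35)]
    total = 0.0
    for stops, n in routes:
        legs = [ENTRANCE] + [pos[s] for s in stops]
        # 90% of patients stop at the pharmacy just before leaving.
        total += 0.9 * n * path_len(legs + [pos["pharmacy"], EXIT])
        total += 0.1 * n * path_len(legs + [EXIT])
    return total

best = min(permutations(SLOTS),
           key=lambda slots: daily_distance(dict(zip(DEPARTMENTS, slots))))
print(dict(zip(DEPARTMENTS, best)))
```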
Conclusion
Design is often considered an applied art rather than an applied science in today's world. In recent history, designers were separated from engineers and builders. When a beautiful freehand line can communicate everything on paper, the need to know the radius and the tangency becomes less important. In addition, the knowledge required to understand mathematical formulas is often beyond designers' reach. For example, many designers may not be able to fully comprehend Milutin Milankovitch's arch theory of masonry construction, even though it has significance in civil engineering and architecture (Foce 2008). Designers are often disengaged from rapid developments in science and leave them to civil engineers or mechanics. This disengagement isolates designers from the rest of the team and makes them less effective. In order to rediscover design, new approaches are needed. This research re-establishes the connection between mathematics and design through space planning, and seeks to offer some different perspectives on what design really is.
References
Evans, James. 1998. The History and Practice of Ancient Astronomy. Oxford University Press.
Foce, Federico. 2008. "Milankovitch's Theorie der Druckkurven: Good Mechanics for Masonry Architecture." Nexus Network Journal 9(2): 185–209, edited by Kim Williams. Basel: Birkhäuser. http://dx.doi.org/10.1007/978-3-7643-8699-3_3.
Tsishchanka, Kiryl. 2010. "Optimization Problems." Courant Institute of Mathematical Sciences, NYU. Accessed October 04, 2015. https://cims.nyu.edu/~kiryl/Calculus/Section_4.5–Optimization%20Problems/Optimization_Problems.pdf
-
-
-
Demystifying Islamic Law
By Faisal Kutty
Those teaching in the area of "Islamic law" (however imprecise that term is) can appreciate the difficulty of conveying information on its sources and major principles. People unfamiliar with the Arabic language struggle enough with the new terminology and could benefit from an overview of the main sources, principles, and methodologies in pictorial form. This flowchart is a very basic, big-picture representation of "Islamic law". It is best viewed as a digital document in which the size can be adjusted. It is only an attempt to give some perspective to the introductory student, but a caveat is nevertheless in order at the outset: any endeavor that attempts to provide a simple overview of a complex system runs the risk of oversimplification. Clearly, it is impossible to set out detailed discussions of the Islamic system's principles, institutions, and their interactions and permutations in such a static fashion. For a more detailed and nuanced explanation of most of what is set out on the flowchart, readers may wish to download and read Part 5 of Faisal Kutty, The Myth and Reality of Shari'a Courts in Canada: A Delayed Opportunity for the Indigenization of Islamic Legal Rulings, 7 U. St. Thomas Law Journal 559 (2010), available at http://ssrn.com/abstract=1749046. The idea is to demystify Islamic law and allow people to see the involvement of human agency in the process, with the objective of making it easier to reform Islamic rulings to address contemporary realities and challenges.
-
-
-
Public Versus Private Higher Education in Qatar: Quality, Returns and Matching to the Labor Market
By Rana Hendy
Education is a major driver of growth. Recently, special attention has been given to the mismatch between the education system and the skills required by the labor market. The aim of the present research is threefold. First, it assesses the higher education system in the State of Qatar by comparing the quality of education provided by public versus private universities. Second, it seeks to understand, using a new micro dataset collected for the purpose of this study, the mismatch that exists between the Qatari labor market and the education one receives at the university level. Third, it makes an effort toward understanding the returns to education in Qatar, which helps place the first two objectives in a relatively more complete framework. The collection of a quantitative micro dataset is itself an important goal of this study. The dataset follows individuals who graduated from a public or a private university in Qatar to see whether they participate in the labor market and, if working, to what extent their jobs match their field of education. The sample is designed to include graduates holding a bachelor's degree in one of three disciplines (biological sciences, computer sciences, or business administration): a representative sample of graduates from Qatar University (the main public/national university in Qatar) on the one hand, and from Carnegie Mellon University in Qatar (one of the well-established private universities) on the other. The quantitative dataset is complemented by a qualitative one that allows us, through round-table discussions, to ask students about the main challenges they face in entering the labor market and their views on the quality of the education they receive or received. The literature on this topic has clearly focused on the quality of higher education institutions, but not on how this supposedly high quality education helps graduates once they are in the real world. This missing element is not often considered when discussing education, although it should be; the present research attempts to amend that, especially for the case of Qatar. The outcome data from this project fill a gap regarding the mismatch between what university education offers and what the labor market needs, particularly in Qatar. Other studies have focused on other countries in the Middle East and North Africa region, but the present study aims to understand how these same matters affect Qatar. Although research exists on education in the context of Qatar, there is little exploration of either the quality of university education or the labor market mismatch, possibly because the unemployment rate is very low. However, the unemployment rate does not reflect the number of graduates (female graduates in particular) who have decided to leave the labor force and become inactive simply because they believe they will not find a job that they desire and that matches their education within the Qatari labor market. The unemployment rate in Qatar therefore probably omits a large part of the population who should rather be called "discouraged unemployed". Based on my discussions with my Qatari students, the majority consider themselves discouraged rather than unemployed or inactive.
To identify the effect of public versus private higher education institution characteristics on labor market outcomes, we first need to isolate the factors that could have affected the individual's choice between public and private education in the first place, i.e., correct for this selection bias. For instance, household wealth may affect the choice between public and private institutions: wealthier individuals may select into private higher education institutions because they can afford the costs, while poorer individuals may not. Labor market outcomes in this case may be a function of the individual's socio-economic characteristics rather than the institution attended. Studying the factors at play and where exactly the problem lies will provide officials and policy makers with the answers they need on how to solve the mismatch as well as the issue of low participation rates (especially for women). This research will yield answers that will hopefully be turned into policy actions. Research is valuable on its own, but acting on it is in the best interest of Qatar, considering that the state is moving into a knowledge-based economy. This research and the policy recommendations it produces will help bridge the ever-present gap between education and the labor market.
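One standard way to implement the selection correction described above is to model the probability of choosing a private institution from pre-enrolment characteristics and reweight accordingly. The sketch below uses inverse-probability weighting on a propensity score; the column names and input file are hypothetical placeholders for the planned micro dataset, and this is only one of several possible correction strategies (Heckman-style selection models are another).

```python
# Minimal sketch of inverse-probability weighting to correct the
# public/private selection bias. Assumes pandas and statsmodels are
# available; "graduates.csv" and all column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("graduates.csv")  # hypothetical survey extract

# Stage 1: model the probability of attending a private university from
# pre-enrolment characteristics (private_university is coded 0/1).
X_sel = sm.add_constant(df[["household_wealth", "parent_education", "female"]])
propensity = sm.Logit(df["private_university"], X_sel).fit().predict(X_sel)

# Stage 2: reweight so the two groups are comparable on those observables,
# then estimate the institution effect on a labour-market outcome.
w = (df["private_university"] / propensity
     + (1 - df["private_university"]) / (1 - propensity))
X_out = sm.add_constant(df[["private_university"]])
effect = sm.WLS(df["log_monthly_wage"], X_out, weights=w).fit()
print(effect.params["private_university"])  # weighted public/private wage gap
```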
-
-
-
The Impact of the Undergraduate Research Experience Program (UREP) on the Teaching of History at Qatar University
This research aims to demonstrate the impact of the outcomes of Qatar Foundation funded student research that can be developed into practical applications serving the rest of the students in a class. Accordingly, it presents an applied model for the course instructor that can be used to teach history in a way that trains students in critical thinking, communication, and cooperative learning skills, by engaging them in posing historical research questions - where? how? what? why? who? - through primary sources that help interpret each source. Among the research projects supervised by Dr. Sherine El-Menshawy as lead principal investigator is a project entitled "Qatar as a Crossroads: Students Write Their Personal Histories" (UREP08-049-6-004), which won a grant in the program's eighth cycle. The project sought to strengthen students' capacity for critical thinking and to develop their writing skills by having them write their personal histories and connect them to the history of the region and to local and international events. The same exercise was introduced in the classes I teach in the Department of Humanities at Qatar University, particularly in courses on what history is and how it is written, from 2011 to the present; rarely had students previously been asked to write about their own history, or the history of their culture, identity, or civilization and the events that shaped them. The activity rests on three approaches. The theoretical approach: students collect all the primary sources that document their history, such as (1) documents: birth certificates, school certificates, certificates of appreciation, their parents' marriage certificate, family tree names, diaries, letters, passport stamps, government papers, travel cards, etc.; (2) images: personal photographs, artwork, pictures of pets, etc.; (3) artifacts: old cars, old family-owned buildings, old clothes, furniture, old coins, old jewellery, an old sewing machine, etc. Students exchange information with their families at home, who help them find the sources that document their lives so their personal histories can be recorded. The applied approach: as course instructor, I give a workshop on the types of primary sources, their strengths and weaknesses, and how to use them in history; on the types of secondary sources and how to verify and cite them; and on how to combine different sources to document a personal history. Students then examine the primary sources using research questions such as where, when, and how; analyzing these sources uncovers much useful information that records their history. To widen the circle of inquiry, students' own claims are put to the test: a student who claims to be a good artist must present sketches and the prizes won; one who claims to have moderated many discussion circles must present certificates proving it. A student's personal documents are a rich source of inquiry: a student born in the Gulf, for example, may hold travel documents from Kuwait, a birth certificate from Qatar, and Egyptian family identity papers. In this context the student tests the documents against the family's oral narrative to arrive at the details. A Qatari student's winter trip with a grandparent to a family camp in the desert, for instance, can yield stories that converge with or diverge from earlier ones in Qatar's fast-changing landscape, stories which may explain the social pathways to present-day life in the country. The comparative approach: at this stage students compare the experience they have gained; working together, they exchange ideas about the sources and deepen their initial analyses, built on sound arguments (sources) and counter-arguments alike. The classroom results, in brief: many students believe that history is a collection of names and dates to be memorized and reproduced in the exam; they had not realized that, by reading and interpreting a source, they can reason about different historical facts for themselves, and that it becomes enjoyable when research questions are posed and answered with supporting evidence.
This activity, carried over from the Qatar Foundation funded research project, showed that history becomes more exciting when it is studied and written from a set of primary sources compared against one another to arrive at the truth, even when that history is the students' own. Among the most important learning outcomes of the activity is a change in how students think about writing history, through a deeper understanding and verification of sources. As instructor of the courses in which the activity was applied, I observed students engaging their families in writing their personal histories, thereby drawing families into their children's education and giving students the chance to write their history along social, historical, economic, and geographic dimensions; the activity showed that involving parents in the learning process matters, and that parental support should not stop at secondary school but should extend, in participatory form, into university. The activity also sought to open wider circles for students writing their personal histories, to discover how political and historical movements, social change, wars, and migrations shaped their lives as a way of understanding history more deeply, giving them occasion to consult databases, archival library resources, maps, and other relevant primary sources. This application is an output of the research project "Qatar as a Crossroads: Students Write Their Personal Histories" (UREP08-049-6-004), funded by the Undergraduate Research Experience Program, and it has become an effective teaching tool for deep historical understanding through students writing their personal histories with the aid of primary sources. Finally, this paper has presented an applied model in which the idea of writing personal history from primary sources can be used as a structured exercise in critical thinking in history courses, with the aim that history instructors and teachers use the model to train students. The design elements of the model are: a primary source from which the information needed to write the personal history can be extracted, and a list of available evidence for answering a list of research questions. The instructional design achieves its purpose when the student can connect the source with the available evidence to infer the correct answer, an inference that requires genuine mental effort. An educational application of this kind integrates critical thinking, communication, and cooperative learning skills to convey historical information by engaging students in historical research questions grounded in primary sources. The researcher therefore suggests that course instructors be given wider scope to teach with sources through students' writing of their personal histories: the application gives students the opportunity to gather information from primary sources and then to interpret, synthesize, and analyze it. It is a new and effective educational direction and a strategy that complements the traditional forms of history teaching - weekly lessons and readings - which succeed only partially; its adoption by history instructors is accordingly proposed.
-
-
-
Corporate Social Responsibility (CSR) Effectiveness of UN Policy “Highlight on the Legal Side”
In light of the massive economic transitions the world has witnessed over the past century, along with the socially, economically, and environmentally irresponsible practices of several multinational enterprises, the UN has acknowledged that its international discourse on rights and freedoms, primarily addressed to governments, has not achieved the same traction with enterprises, even though "social" responsibility is primarily a corporate duty. To this end, and given its vital contribution to the development process, the UN's legal rules on social responsibility merit review. Since the 1960s, the UN has adopted international legal and social initiatives to enhance corporations' role in sustainable development, such as the OECD Guidelines for Multinational Enterprises (1976); the Tripartite Declaration of Principles Concerning Multinational Enterprises and Social Policy (1977); the UN initiative for the business sector, the United Nations Global Compact (2000); and, most recently, the Guiding Principles on Business and Human Rights, implementing the UN "Protect, Respect and Remedy" framework (2011). All these instruments were intended to perfect a UN policy aimed at standardizing and moralizing corporate economic activity by urging corporations to respect a set of rights. This UN policy, despite progressing along the right path, still suffers from several legal deficiencies, specifically regarding its efficiency and its capacity to sustain corporate respect for the above-mentioned rights. Examining the legal aspect of UN policy may seem to run against established doctrine and literature, whether regarding the nature of social responsibility, which essentially rests on a non-obligatory concept, or regarding the very concept of obligation in international law itself, which often relies on "soft power" rather than the "forcing stick". This lends the paper a dialectical character and challenges the reader's insight by establishing a legal doctrine that contradicts the common one, affirming the significance of this responsibility while "legalizing" it. The legal aspect of this responsibility has been widely discussed in Western legal doctrine, particularly in terms of reinterpreting the rights implicit in it and the roles assigned to the partners, specifically the UN and corporations. On this basis, the paper offers a legal and critical study of the effectiveness of UN policy on corporate social responsibility. It investigates the policy's legal framework with respect to both protection and consistency, and then discusses the international legal controls that guarantee corporations' respect for, and commitment to, these rights, whether through the legal nature of the international discourse or through the legal nature of international oversight of corporate social responsibility.
-
-
-
Seeking God's Assistance to Govern: A Comparative Analysis of Islamization Policies in Pakistan and Egypt
The uneasy relationship between religion and politics is a dominant feature of many post-colonial Muslim-majority states. Two of the most prominent cases in this regard are Egypt and Pakistan. This paper offers a comparative analysis of the two countries under the regimes of Anwar Sadat in Egypt (1970–1981) and Zia-ul-Haq in Pakistan (1978–1988). The analysis takes into account the changed role and status of the state in public and private life, and the construction of new institutional structures and modification of existing ones to implement the two leaders' Islamic visions. It is noteworthy that even though both leaders tried to Islamize the polity, the trajectories they followed were radically different. The paper argues that this difference can be attributed to the lack of a universal understanding of the notion of an Islamic state. Furthermore, because of this lack of coherence in defining an Islamic state, the paper also addresses the question of achieving an Islamic government, for which no universal form of governance exists either. In most cases, Islam is used as a tool by the ruling regime to gain legitimacy; this legitimacy then translates into a political catalyst for governance. State-society relations, characterized by the authoritarianism of each regime, also played a part in shaping the form of government. The conclusion emphasizes the current situation in both countries. One of the questions raised in this paper is whether religion and state can coexist. Based on the cases of Pakistan and Egypt, religiosity harms society's trust in the state; as religiosity increases, radicalism increases, and the Jamaat-i-Islami and Muslim Brotherhood movements suggest that radicalism is the product of a divided, constructed state. The same question is raised in terms of Islamic laws and institutions: because there is no universal set of laws or institutions in the imagined Islamic state, one is left wondering how to create a state that bears in mind the needs of the ummah, or Muslim community. Islam thus served as the legitimizing principle to govern, and in order for Islam to be the political catalyst for change, devout Muslims were chosen to represent the people. Pre-existing government structures allowed both leaders to pursue Islamic revivalism more fluently. The post-colonial nature of the state affected the cultural and national identity of both countries, as many people did not know where they came from or which group to identify with. Both leaders raised the question of whether to revert to the colonial heritage or to create a new sense of universalism within the state. In Pakistan, one's ethnic background was central to defining one's identity, whereas in Egypt it was one's level of religiosity. Catering to the needs of a diverse population became more difficult for each state, and more institutions were created as a result, even as some groups were completely marginalized from the process. Once independence was achieved, a new government structure was created to meet the demands of society. With both governments moving toward a 'theocracy in its purest form', power became authoritarian. In terms of institutionalism, Zia believed governance was the knot that tied his Islamization program together, whereas for Sadat that knot was text. Under different names, both states adopted institutions intended to ensure that Islamic and Shari'a law were compatible with every functioning institution.
Pakistan and Egypt therefore fell into the category of pseudo-theocracies. Even leaders who came after Zia and Sadat envied their dedication to their missions. What both leaders did so well was manipulate the societies over which they held authority, convincing them that whatever they did politically was justified. Even though much of both societies was marginalized, people eventually became convinced that in the long run they would be better off. As for the current situation in both countries, the data suggest Pakistan falls into the category of a democracy while Egypt is a closed anocracy. This situation can be attributed to the Islamization policies adopted in the 1970s. Such decisions were defended only by claiming that God had chosen who had the right to govern and to establish an Islamic state. Believing one was chosen by God to rule was the first miscalculation; the second was believing that religion and politics are inseparable. Both leaders became so obsessed with the idea of Foucault's bio-politics that they forgot what Islam was really about. Had Zia and Sadat understood what the Quran says about governance, they would have realized that no single form of government exists and that it is up to both the leader and the citizens to decide how they wish to be governed. It is up to Islamic scholars and those qualified to speak for and interpret Islam to recommend laws and institutions to govern a society. If governments continue to regulate religious practice and behavior, the freedom to practice faith decreases, even as state legitimacy rises from the leadership's point of view. To conclude, the changing nature of Muslim-majority states can be attributed to the rise of Islamic movements in the post-colonial era. Both Anwar Sadat and Zia-ul-Haq felt their respective states were moving in the wrong direction because of a lack of religious fervor. Because of their limited understanding of Islamic law and jurisprudence, two different visions of an Islamic state formed in the minds of the two leaders, resulting in two radicalized states whose approaches to governance differed. Although the trajectories of Islamization differed, the desire for power and the need for social acceptance were similar in both cases. Overall, this comparative analysis takes all of the aforementioned factors into account and concludes that there is no concrete definition of Islamic governance, and that the impact of these Islamization programs will persist unless a radical shift in ideologies occurs.
-
-
-
Employability Enhancement Model for Young Qatari Graduates
Authors: Gokuladas Vallikattu Kakoothparambil and Sandhya Menon
Introduction
Qatar, as a progressive nation, has recognized the importance of investing in human capital with a view to building a strong, developed economy and facilitating the transfer of technologies. A well-educated workforce is essential for creating, sharing, disseminating, and using knowledge effectively, and improving quality at every level of the education system, from early childhood to adult training, is a strong prerequisite for turning Qatar into a knowledge-based economy (Planning Council, Government of Qatar, 2007). To support Qatar's rapid economic growth and increasing presence on the world stage, it is imperative that young Qataris be motivated to play key roles in all areas of economic and social development, through appropriate guidance in identifying opportunities for the right kind of engagement. However, there has been growing concern over the employability of Qatari youth (General Secretariat for Development Planning, 2012), especially in the wake of the influx of multinational companies into Qatar. Quite often, university graduates in Qatar are ill-prepared to take on the challenges of joining the labour force because they are poorly informed about the rigorous standards being adopted in public sector employment and about the many career opportunities in the private sector (General Secretariat for Development Planning, 2012). This phenomenon is not particular to Qatar: many other countries face the same concern that those leaving higher education institutions are not equipped to meet the demands of the labour market (Marzo-Navarro et al., 2009). Thus, the main objective of this study is to develop a theoretical model of employability enhancement for young Qatari graduates that could be further explored through empirical research programs in Qatar.
Theoretical Framework
Employability may be crudely defined as the ability to survive in the internal or external labour market (Thijssen et al., 2008). According to McQuaid and Lindsay (2005), the meaning of employability varies across stakeholders such as employers, employment seekers, and policy makers. Employers may consider someone with appropriate employability skills and attributes 'employable', but this may be only the minimum criterion when considering candidates; from a job seeker's perspective, a lack of enabling supports (such as transport to work) or unattractive contract terms (such as a requirement for shift work) may mean that a specific job is not acceptable. From a policy maker's perspective, the fact that the person does not take the job and remains unemployed suggests that (within the context of a specific vacancy or job role) the person is not 'employable'. The spectacular growth and increasing diversification of Qatar's economy has opened up a number of avenues for Qatari nationals in education, training, and employment, especially for youth. Qatar National Vision 2030, the driving force behind these developments, is intended to empower all nationals, and women more directly, by identifying goals to advance their position and status in society (Al-Matawi, 2011).
In view of the above, there is a pressing need to devise a research model that would support research on employability in Qatar. Of late, Qatar has made major improvements in the education and training sector, yet there is still a need for continued development (Supreme Education Council, 2012). An analysis of the current education system shows that Qatar still faces challenges affecting both the supply of and demand for education and training and their proper connection to the labor market. To overcome such challenges, we have to answer the following research questions. To what extent are young Qatari graduates aware of the employability skills they need to possess for their immediate assignments? What qualities and attributes do various industries in Qatar look for when recruiting young graduates? To what extent are such attributes incorporated in the outcome-based learning of higher education institutions in Qatar?
Proposed Employability Enhancement Model
Various models of employability (Law and Watts, 1977; Bennett et al., 1999; Yorke and Knight, 2004; Bridgstock, 2009; Popovic & Tomas, 2009) have identified specific qualities that prospective employees need in order to be employable soon after their studies. Most of these studies emphasize the interpersonal qualities that a young graduate may lack because of the transitional stage of adolescence. Moreover, the complexity of employability and the variety of higher education curricula mean that no single, ideal prescription for embedding employability can be provided (Yorke & Knight, 2004). However, most of these studies were conducted in developed nations, where the context of employability differs from that of a developing nation whose education system may still be in its infancy; the capabilities that individuals from these countries possess therefore differ from those of their counterparts elsewhere. For example, the major challenges to enhancing the competitiveness of Qatari nationals include the underachievement of Qatari students in math, science, and English at all levels; weaknesses in educational administration, including the preparation and development of teachers; insufficient alignment between the national curriculum and the needs of the labor market; low standards in some private schools; and inadequate offerings of multiple pathways beyond the secondary level, resulting in limited opportunities for Qataris to continue their education after secondary school and throughout their lives (Supreme Education Council, 2012). This context makes the case for a tailor-made employability model for young Qatari graduates that could substantially improve their employability, as specified in Fig. 1.
LaunchPAD Model of Employability
The proposed LaunchPAD model of employability revolves around four major pillars that together foster the employability of young graduates: Launch, Prepare, Action, and Deploy.
Launch
This stage comprises the activities undertaken by the educational institution on the arrival of a new undergraduate.
These include helping students understand the purpose of education, helping them set goals in line with their own capabilities, and instilling a sense of achievement that motivates them to reach those goals in the best possible manner.
Prepare
Soon after the Launch stage, in which students become aware of their own capabilities and the goals they have set, they move on to the Prepare stage, in which they become aware of the employability skills required by the industry they hope to join. This stage requires continuous support and feedback from each industry in Qatar regarding its skill requirements; such data help educational institutions incorporate these qualities into the graduate attributes of their outcome-based pedagogy.
Action
Having understood the technical and non-technical knowledge and skill requirements of various industries, the Action stage provides industry-specific training programs to enhance the employability of students who aspire to join a given industry. The knowledge and skills acquired at this stage make students more confident in aligning their goals with those of the industry. Students should be filtered at this stage to ensure they acquire a certain degree of proficiency in these skills before moving to the next level.
Deploy
At this stage, students, having acquired the necessary theoretical knowledge and skills vis-à-vis industry requirements, need workplace experience in which to put their knowledge and skills into practice. This can be arranged in collaboration with industry through internships, projects, apprenticeships, and the like. The opportunity not only lets students get a feel for the workplace and choose the company that suits them, but also helps employers identify the right talent for their organizations.
Efficacy of the LaunchPAD Model of Employability
Qatari youth make up 15% of the Qatari population and are generally highly influenced by tribal authority and traditional culture. With globalization, family life in Qatar is undergoing substantial change, particularly as women are increasingly encouraged to take part in economic, political, and social activities. Motivating young Qataris to play key roles in economic and social development is especially relevant for those who pursue tertiary education, where university graduates are often ill-prepared for the labour force because they are poorly informed about the rigorous standards adopted in public sector employment and the many career opportunities in the private sector (General Secretariat for Development Planning, 2012). The LaunchPAD model, simplistic in nature, has been developed from a practitioner's point of view. It is expected to motivate students to set their own goals and to help them achieve those goals through systematic interventions from educational institutions and industry.
This model will ultimately help graduates perform a "graduate-level job" rather than resort to just any job after graduation (Pool & Sewell, 2007), helping them secure a "self-fulfilling" occupation. It will be of great assistance to educational institutions, industry, and training organizations in Qatar in ensuring that the right talent is available in the right work environment to achieve greater economic growth.
References
Al-Matawi, S. (2011). Key Issues, Challenges and Opportunities Confronting Qatari Youth Today. Background paper for Qatar Human Development Report: Expanding Capacities of Qatari Youth. Doha: Qatar General Secretariat for Development Planning and United Nations Development Programme.
Bennett, N., Dunne, E. and Carré, C. (1999), “Patterns of core and generic skill provision in higher education”, Higher Education, Vol. 37, pp. 71–93.
Bridgstock, R (2009) The graduate attributes we've overlooked: enhancing graduate employability through career management skills, Higher Education Research & Development, vol 28, no 1, pp 31–44.
General Secretariat for Development Planning, (2012). Expanding the Capacities of Qatari Youth. Available at http://planipolis.iiep.unesco.org/upload/Qatar/Qatar_HDR_2012_English.pdf (Accessed on 01 July 2014).
Law, W. and Watts, A.G. (1977), Schools, Careers and Community, Church Information Office, London.
Marzo-Navarro, M., Pedraja-Iglesias, M. and Rivera-Torres, P. (2009), "Curricular profile of university graduates versus business demands: Is there a fit or mismatch in Spain?", Education & Training, Vol. 51, pp. 56–69.
Planning Council, Government of Qatar (2007). Turning Qatar into a Competitive Knowledge-Based Economy. Available at http://siteresources.worldbank.org/KFDLP/Resources/QatarKnowledgeEconomyAssessment.pdf (accessed on 01 July 2014).
Pool, L.D. & Sewell, P. (2007), "The key to employability: developing a practical model of graduate employability", Education + Training, Vol. 49, No. 4, pp. 277–289.
Popovic, C and Tomas, C (2009) Creating future proof graduates, Assessment, Learning and Teaching Journal, vol 5, pp 37–39.
Supreme Education Council (2012). Education and Training Sector Strategy 2011–2016. Available at http://www.sec.gov.qa/En/about/Documents/Stratgy2012E.pdf (accessed on 01 July 2014).
McQuaid, R.W. and Lindsay, C. (2005). The Concept of Employability. Urban Studies, Vol. 42, No. 2, pp. 197–219.
Thijssen, J.S.L., Van Der Heijden, B.I.J.M., and Rocco, T.S., (2008). Toward the Employability–Link Model: Current Employment Transition to Future Employment Perspectives, Human Resource Development Review, Vol. 7, No. 2, pp.165–183.
Yorke, M. and Knight, P.T. (2004), Embedding Employability into the Curriculum, Higher Education Academy, York.
-
-
-
A Qualitative Study of Student Perceptions, Beliefs, Outlook and Context in Qatar: Persistence in Higher Education
Authors: Batoul Khalifa, Ramzi Nasser, Atmane Ikhlef, Janet S. Walker and Said Amali
Qatar went through an educational reform in 2000; its education system, and particularly its schooling system, underwent a major overhaul from K-12 up to the higher education providers. The major reason for the reform was to raise student academic achievement. Concomitantly, the rapid growth of Qatar's economy over recent decades has created a situation in which the demand for skilled labor far exceeds the supply of qualified Qatari nationals. The Qatar National Development Plan identified acute needs for highly educated and skilled Qatari nationals in health and biomedical sciences, engineering, energy and environment, and computer and information technology (Qatar National Development Strategy, 2011). Two significant higher education providers serve post-secondary students, Qatar University (QU) and the American branch universities at Qatar Foundation, and both have grown tremendously over the years. Understanding the factors that affect Qatari students' post-secondary persistence and achievement is crucial for the country's human capital growth. Tinto (1975) posited a theory of student integration into the academic and social system of higher education providers, suggesting a multidimensional framework in which the higher education community engages students in all aspects of higher education, academic and non-academic alike. Tinto's theory essentially hypothesizes that persistence is determined by the match between an individual's motivation and academic ability and the institution's academic and social characteristics. A second major model is Bean's (1986) attrition model of students' intentions to stay or leave, derived from psychological theories and based on the attitudinal research of Ajzen and Fishbein (1972), later developed by Bentler and Speckart (1981). Key ideas from this model are that intentions and behaviors are strongly associated and that an undergraduate's decision to persist or drop out is strongly related to affect. One conclusion about student engagement is that students, especially in their first years, need to be satisfied and academically prepared in order to succeed and maintain continuous enrollment in higher education (Astin, 1985; Tinto, 2005; Kuh, 2001, 2007). Tinto's integration theory has received considerable validation regarding non-academic factors and their impact on student continuation (Pascarella & Terenzini, 1977; Terenzini & Pascarella, 1977; Chapman & Pascarella, 1983; Pascarella & Chapman, 1983). The latter model has received empirical validation and support from a large number of studies that examined background information, such as the socioeconomic level of students' families, and its effect on post-secondary continuation (Astin & Oseguera, 2004; Sewell & Shah, 1968). The large number of studies from the United States (US) and other Western countries (Kenny & Stryker, 1994; Dekker & Fischer, 2008) have underlined differences in how students develop and internalize the beliefs, needs, and wants that in turn shape academic motivation to persist and succeed in higher education. While few studies have emerged from the Middle East, the recent establishment of the Middle East and North Africa Association of Institutional Research has prompted many researchers to seek to understand the experiences of students in higher education.
In Qatar, for instance, the first-year experience study and the National Association of Colleges and Employers survey have only recently been implemented at the national public university. The danger of students dropping out of university, together with the large number of students likely to remain in their first years longer than expected, reflects the risk of higher education becoming a bottleneck to economic and human resource development (Qatar University Fact Book, 2011). Completion rates in higher education, in Qatar as anywhere in the world, are discouraging. In the US, for instance, it has been reported that 55% of undergraduates who begin study at a 4-year institution complete a degree at that same institution within 6 years of initial enrollment, and another 7% complete baccalaureate degrees within six years after attending two or more institutions (Lotkowski, Robbins & Noeth, 2004; Kuh, Kinzie, Buckley, Bridges, & Hayek, 2007). Pascarella (1985) and Adelman (2006) concluded that continuous enrollment is the most powerful variable in explaining degree completion and time to degree. Several factors, academic and non-academic, are likely to affect students as they make the transition to post-secondary institutions. Many students may experience stress, anxiety, withdrawal, and even depression (Robbins, Lauver, Le, Davis, Langley & Carlstrom, 2004; DeStefano, Mellott & Peterson, 2001; Feldt, Graham, & Dew, 2011; Wei & Zakalik, 2005). A variety of non-academic challenges also have a bearing on students' likelihood of academic persistence and success. A fairly large body of research undertaken in a number of countries has examined the experiences of international students and compared them to those of students native to the host country. Academic factors (e.g., secondary preparation) appear to influence post-secondary success (see Robbins, Lauver, Le, Davis, Langley & Carlstrom, 2004), but a range of non-academic factors, shaped by culture and values, may also contribute to the challenges students face in higher education in their local context. This study addresses the challenges Qatari students face in higher education in Qatar. It draws on student perceptions, beliefs, outlook, and context; we take a grounded approach, leading interview questions through exploration and probing. The approach is grounded in the sense that no specific theory drives the questions; rather, the interview responses call upon theory to explain the findings. The sample consists of 35 students interviewed using probing and questioning techniques, with converging responses leading to themes. The long-term goal of this line of research is to provide Qatari society with much needed scientific information about the challenges its students face in completing their university education with the competence needed to build Qatar's human capital and support its rapidly expanding economy. Finally, we believe there is a broader regional need for specific and focused information on this topic, as the study findings are directly applicable to students from several other countries in the region.
References
Adelman, C. (2006). The Toolbox Revisited: Paths to Degree Completion From High School Through College. U.S. Department of Education.
Ajzen, I., & Fishbein, M. (1972). Attitudes and normative beliefs as factors influencing behavioral intentions. Journal of Personality and Social Psychology, 21(2), 1–9.
Astin, A. W. (1984). Student involvement: A developmental theory for higher education. Journal of College Student Personnel, 25(4), 297–308.
Astin, A., & Oseguera, L. (2004). The declining "equity" of American higher education. The Review of Higher Education, 27, 321–341.
Bentler, P. M., & Speckart, G. (1981). Attitudes "cause" behaviors: A structural equation analysis. Journal of Personality and Social Psychology, 40(2), 226–238.
Chapman, D. W., & Pascarella, E. T. (1983). Predictors of academic and social integration of college students. Research in Higher Education, 19, 295–322.
Dekker, S., & Fischer, R. (2008). Cultural differences in academic motivation goals: A meta-analysis across 13 societies. Journal of Educational Research, 102(2), 99–110.
DeStefano, T. J., Mellott, R. N., & Peterson, J. D. (2004). A preliminary assessment of the impact of counseling on student adjustment to college. Journal of College Counseling, 4, 113–121.
Kenny, M. E., & Stryker, S. (1994). Social network characteristics of White, African-American, Asian and Latino/a college students and college adjustment: A longitudinal study.
Kuh, G. (2001). Assessing what really matters to student learning: Inside the national survey of student engagement. Change, 33(3), 10–17.
Kuh, G. (2007). What student engagement data tell us about college readiness. Peer Review, 9(1), 4–8. Association of American Colleges and Universities (AAC&U). Retrieved June 26, 2015, from http://www.aacu.org/publications-research/periodicals/what-student-engagement-data-tell-us-about-college-readiness.
Kuh, G. D., Kinzie, J., Buckley, J. A., Bridges, B. K., & Hayek, J. C. (2007). Piecing together the student success puzzle: Research, propositions, and recommendations. ASHE Higher Education Report, 32(5), 1–182.
Lotkowski, V. A., Robbins, S. B., & Noeth, R. J. (2004). The role of academic and non-academic factors in improving college retention. ACT Policy Report.
Qatar National Development Strategy (2011). Qatar General Secretariat for Development Planning: Doha.
Pascarella, E. T. (1985). Racial differences in factors associated with bachelor's degree completion: A nine-year follow-up. Research in Higher Education, 23(4), 351–373.
Pascarella, E. & Chapman, D. (1983). A multi-institutional path analytical validation of Tinto's Model of college withdrawal. American Educational Research Journal, 20, 87–102.
Pascarella, E. & Terenzini, P. (1977). Patterns of student-faculty informal interaction beyond the classroom and voluntary freshman attrition. Journal of Higher Education, 48, 540–552.
Robbins, S. B., Lauver, K., Le, H., Davis, D., Langley, R., & Carlstrom, A. (2004). Do psychosocial and study skill factors predict college outcomes? A meta-analysis. Psychological Bulletin, 130, 261–288.
Sewell, W., & Shah, V. (1968). Social class, parental encouragement, and educational aspirations. American Journal of Sociology, 73, 559–572.
Terenzini, P., & Pascarella, E. (1977). The relation of students' precollege characteristics and freshman year experience to voluntary attrition. Research in Higher Education, 9, 347–366.
Tinto, V. (1975). Dropout from higher education: A theoretical synthesis of recent research. Review of Educational Research, 45(1), 89–125.
Tinto, V. (2005, January). Taking student success seriously: Rethinking the first year of college. In Ninth Annual Intersession Academic Affairs Forum, California State University, Fullerton (pp. 05–01).
Wei, M., Russell, D. W., & Zakalik, R. A. (2005). Adult attachment, social self-efficacy, self-disclosure, loneliness, and subsequent depression for freshman college students: A longitudinal study. Journal of Counseling Psychology, 52, 602–614.
-
-
-
Adjustment to College in the United States: Perceptions of Qatari Students
Authors: Janet Walker, Jennifer Blakeslee, Batoul M Khalifa, Ramzi Nasser and Atmane Ikhlef
Given the rapid growth of Qatar's economy over recent decades, workforce demand for highly-skilled Qatari nationals has increased (Berrebi, Martorell, & Tanner, 2009; Qatar General Secretariat for Development Planning, 2008). There is therefore a great deal of interest in supporting Qatari student success at high-quality post-secondary educational institutions, both in Qatar and abroad. An increasing number of Qatari post-secondary students, particularly males, are specifically choosing to attend college in the United States, with 1,191 Qatari college students studying abroad in the US (Institute of International Education, 2014). Because these Qatari students are earning degrees abroad for the purpose of fulfilling critical jobs when they return, it is important to understand factors contributing to the academic persistence and performance of Qatari students in the US. This qualitative study is part of a collaborative research effort undertaken by investigators based in the United States and Qatar to better understand Qatari student perspectives on their post-secondary adjustment and success. Here, we report findings from structured interviews with Qatari nationals studying abroad in the US, all of whom were males who were, or had recently been, undergraduates at state universities and/or community colleges in Oregon (n = 21). Approximately two-thirds were in business or economics programs and about a third in STEM programs (science, technology, engineering, or mathematics). Most of the interviews were conducted in Arabic and translated into English for transcription, coding, and thematic analysis (Braun & Clarke, 2006). The goal of the study described here was to augment the existing literature about international college student persistence and academic performance with a qualitative, open-ended exploration of Qatari students' perceptions of the barriers to, and facilitators or potential facilitators of, their adjustment to college in the United States. In general, study findings resonate with much of what is known about the adjustment experiences of international students in unfamiliar settings—specifically regarding second language proficiency, other academic factors, social support, and daily living experiences—with additional lessons learned for specifically supporting Qatari students in English-speaking post-secondary institutions. For example, second language proficiency consistently appears in the literature as the most important factor influencing international student adjustment, particularly since a lack of proficiency in the host language can interact with other potential stressors in both academic and sociocultural domains. In the present study, Qatari students almost universally reported challenges related to mastering English to an extent that would make it possible for them to undertake college-level coursework successfully. In particular, students noted that they lacked the specialized vocabulary needed for college-level work, and many lacked confidence in their ability to communicate with other students and professors. These experiences contributed to a sense of isolation and caused problems in daily situations in the community, such as shopping or social settings with English-speaking students.
Students also described how their college adjustment was hampered by problems with English despite having participated in classes and programs—both in Qatar and abroad—that were intended to improve their English language proficiency. Students also reported factors that facilitated adjustment by helping them overcome perceived English deficiencies, and many said their English improved as a result of being in English-only environments, for example, through a home stay program, by seeking out English-speaking friends, or through taking classes with non-Arabic speaking students or professors. Beyond language difficulties, many other academic factors can pose challenges to international students' adjustment. Studies have documented several types of stressors resulting from a mismatch between students' previous academic experiences and what is required for success in the host institution, and many of these are reflected in the Qatari student experiences in Oregon. For example, students may be underprepared in terms of mastery of prerequisite material, may be accustomed to teaching and learning styles that differ from those typical of Western higher education, and may experience additional academic stress if they feel that they are failing to live up to family expectations and/or the expectations of a sponsoring organization in their home country (Smith & Khawaja, 2011). In this study, it was common for students to describe themselves as academically unprepared for college in the US, noting differences in expectations between secondary school in Qatar and college in the US, both in terms of the level of effort required and the expectation that the student be responsible for his own education. Students also reported academic stress and confusion related to unfamiliar requirements and policies, insufficient or ill-informed advising, or professors who did not understand how to teach international and/or Arab students. Overall, the main facilitators of successful academic adjustment cited by the Qatari students were interactions with helpful students, faculty and staff. Many emphasized that other Arab students were their first line of academic support, some received helpful support from American tutors and advisors, and smaller classes were seen as beneficial. Social support is another key theme that appears frequently in the literature on international students' adjustment (Araujo, 2011; Smith & Khawaja, 2011; Zhang & Goodson, 2011). A lack of social support can contribute to feelings of loneliness, homesickness and/or isolation, and it can also mean that students have fewer resources to draw on in their coping efforts. The literature consistently reflects that international students tend to rely on support from "co-nationals" or "co-culturals", i.e., other students from similar backgrounds, though relationships with students from the host country are also important contributors to international students' adjustment and well-being (Al-Sharideh & Goe, 2014; Du & Wei, 2015; Hirai, Frazier, & Syed, 2015; Zhang & Goodson, 2011). In this study, the Qatari students consistently described how they relied on social support to combat loneliness and isolation, but also for practical information and advice. Almost universally, other Qatari students, as well as students from other Gulf or Arab countries, were seen as the key source of social support. However, these students typically reported having no American friends, though this tended not to be viewed as a problem.
Indeed, students described spending most of their discretionary time with co-culturals, and while this was primarily viewed very positively, it could also be a distraction from academic responsibilities. Lastly, research on international student adjustment frequently includes a focus on daily living challenges that arise in an environment structured by unfamiliar rules, laws, mores and expectations. However, these difficulties were only somewhat reflected in this study. While the students reported some challenges related to housing and generally feeling "shy" when interacting in the community, the most stressful situations appeared to stem from interactions with formal authority figures (immigration officials and the police). More frequently, however, students commented that they felt comfortable and welcomed by Americans and by the Oregon towns and communities in which they were studying. Students in the university towns found them "safe," "calm," "quiet," and "comfortable," while students in Portland noted that they had chosen to attend school there because the city had a reputation for having little crime, and because the residents were seen as helpful and welcoming. Although this study is limited by the small sample size and the fact that all of the students were attending college in Oregon, the findings suggest avenues for further exploration. A key area for future investigation would be the development and testing of programs, policies and interventions consistent with study findings and existing research, and also, in most cases, consistent with what the participants themselves suggested as ways of improving Qatari students' adjustment experiences. For example, students offered several recommendations to reduce adjustment stress through efforts undertaken in Qatar to improve students' preparation prior to their departure for the US. Many comments focused on the need for improved instruction in English, with suggestions regarding how college preparatory programs could be improved to focus on academic writing and reasoning in English. Further, students and researchers alike recommend secondary school information sessions and intensive pre-departure orientation programs focused on what to expect academically, socially, legally, and culturally when studying and living in the US, with experienced study abroad students playing an important role in these efforts. Additionally, some students reported very positive experiences from homestay programs in the US, specifically for accelerated language practice and practical support in adjusting to the new country, and such experiences could be better developed and marketed to pre-departure Qatari students.
References
Al-Sharideh, K. A., & Goe, W. R. (2014). Ethnic communities within the university: An examination of factors influencing the personal adjustment of international students. Research in Higher Education, 39(6), 699–725.
Araujo, A. A. De. (2011). Adjustment Issues of International Students Enrolled in American Colleges and Universities: A Review of the Literature. Higher Education Studies, 1(1), 2–8. doi:10.5539/hes.v1n1p2
Berrebi, C., Martorell, F., & Tanner, J. C. (2009). Qatar's Labor Markets at a Crucial Crossroad. The Middle East Journal, 63(3), 421–442. doi:10.3751/63.3.14
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
Du, Y., & Wei, M. (2015). Acculturation, Enculturation, Social Connectedness, and Subjective Well-Being Among Chinese International Students. The Counseling Psychologist, 43(2), 299–325. doi:10.1177/0011000014565712
Hirai, R., Frazier, P., & Syed, M. (2015). Psychological and sociocultural adjustment of first-year international students: Trajectories and predictors. Journal of Counseling Psychology, 62(3), 438–452.
Institute of International Education. (2014). Open Doors data: International students: All places of origin. Retrieved from http://www.iie.org/en/Research-and-Publications/Open-Doors/Data/International-Students/All-Places-of-Origin/
Qatar General Secretariat for Development Planning. (2008). Qatar National Development Strategy 2011–2016. Doha.
Smith, R. A., & Khawaja, N. G. (2011). A review of the acculturation experiences of international students. International Journal of Intercultural Relations, 35(6), 699–713. doi:10.1016/j.ijintrel.2011.08.004
Zhang, J., & Goodson, P. (2011). Predictors of international students' psychosocial adjustment to life in the United States: A systematic review. International Journal of Intercultural Relations, 35(2), 139–162. doi:10.1016/j.ijintrel.2010.11.011
-
-
-
Kids' Channels as a Cause of Autism Spectrum Disorder: Leaving a Child Under Two Years in Front of the Television is a Crime
A very young child is born with his mind loaded with a genetically acquired operational program. The text of this program is: pay attention to what is repeated, and pay attention to visual and auditory stimuli. The basis of learning is repetition, and stimulation of the senses comes from interaction; for this program to work, adults must interact with the child in order to stimulate his mind and its nerve cells. The most important year of a child's life is the first, then the second. Neglecting a child in the first two years, whether by leaving him to watch television, leaving him with a maid, or simply not interacting with him, causes some senses to stop developing and arrests the child's mental and linguistic age; a brain that is neglected or not stimulated in the first months becomes less active in subsequent years. To answer questions from parents, we surveyed the parents of more than 300 children on the autism spectrum from various Arab countries, as well as Arab expatriates residing in foreign and Arab countries, and found that 90% of the children had been inadvertently neglected by being left in front of children's channels or device screens for long periods in their first or second year; the majority were left by their parents to watch channels that repeat content and songs. The most common circumstances leading to a child being left to the television were: 1. a working mother; 2. an expatriate or traveling mother; 3. an early second pregnancy; 4. a busy father; 5. ignorance of the seriousness of television. The mothers' answers, and some videos, showed that from birth until the children began watching television their behavior was normal in terms of sounds, movement, visual communication, and hearing, especially in the first 6 months. Symptoms of autism spectrum emerged after the children became addicted to watching these channels: lack of eye contact; not responding to being called; attachment to the television or screens; arrested linguistic and mental age, so that the child does not function at his true age; not joining other children in play; trouble sleeping and eating; repetitive movements; and problems with some senses such as smell, touch, and taste, together with sensitivity to sound. We found that the symptoms depend on several factors and vary from child to child, including: 1. the month of age at which the child began to watch television for long periods in the first year, which determines the mental age at which development froze; 2. how long the child's exposure to television or device screens lasted and how the parents interacted with the child, which affects the senses. By considering how the brain works and the impact of television addiction on children, we came to understand how television causes these symptoms and how it works to freeze the mind and the senses. A child is born with the memory for each sense empty; each sense has a storage-and-retrieval program that must be filled, and the memory for each sense formed, by the age of 18 months. Repetition is the basis of learning, and interaction is the basis for the work of the senses. What happens when the child watches channels that repeat content, songs, and cartoons? First, the sense of sight: for sight to develop, the child must repeatedly watch real things and store in visual memory their colors, sizes, and shapes, as well as their natural speeds, and very young children learn best by relating to real, live people.
When the child watches television instead, he stores two-dimensional images without size, without touch, and without smell. In the child's memory, size, shape, and smell are not stored, and he will not look at real things for long, because he is accustomed to watching pictures shown at great speed and in two dimensions; the result is a lack of eye contact, since real things are not two-dimensional. 2. Hearing: naturally, a child hears sounds of different intensities, levels, distances, and directions; he hears his name many times, learns to attend to the call, and with repetition his name is stored in auditory memory. When he is called, he retrieves the sound from auditory memory, his mind analyzes it quickly, and he attends to the call. When watching television, he hears sounds at one level and from one direction, and he stores the specific sounds he watches, such as songs and cartoons. Because the channels repeat their content, these songs are stored in auditory memory, while the child spends long hours without hearing his own name; since the foundation of storage is repetition, the sound of his name fades from memory and he no longer responds to his parents' call, even though he attends to the voices of the television. The ear also becomes accustomed to sounds at one level, causing hearing sensitivity, and speech problems occur: because the channels repeat songs, the child tends almost addictively to repeat the songs, so that he does not respond to his father's call but is quickly attracted to the channel as soon as it plays, even from another room. 3. Lack of movement: sitting in front of the television for long periods causes problems in the systems for estimating distances, for balance, and for touch. 4. Eating: a person absorbed in television does not attend to what he eats or how it tastes; this is what happens to the child who eats while watching television, weakening the sense of taste, and food is swallowed without being attended to or properly digested. Television addiction also freezes feelings and the growth of social relations, so that in some cases the child does not even care about his mother's absence. 5. Speech: the child only receives; laziness in thinking and speaking occurs and speech weakens, because language acquisition results from interaction with others, not from watching television; touch and smell also weaken, since mental age depends on these moments of interaction. The American Academy of Pediatrics has recommended preventing children under two years from watching television; just as weaning is recommended at two years of age, we recommend the same for screens, and we hold that leaving a child under two years to the television is a crime.
We demand that channels that repeat content and songs display a warning on screen against viewing by children under two years, to prevent the spread of autism spectrum disorder. Our recommendation is to prevent children under two from watching television at all, especially in the first year, and to interact with the child as much as possible with sound, movement, smiles, and touch. Messages from parents mostly confirm that the affected child was unintentionally neglected by being left in front of children's channels, often with a nanny who does not speak the child's language, while the mother was busy with work, with a pregnancy during the very months in which the child most needed interaction in order to develop, or with the Internet and social networking sites. It is clear that children's channels are a major reason for the increased prevalence of autism spectrum disorder, and that leaving a child under two years of age to watch television screens is a crime.
The channels with the greatest influence on the children were, first, Toyyour Aljannah TV at 70%, followed by Space Toon, BRAAEAM TV, and Tom and Jerry. A message posted by the mother of twins, a boy and a girl, says that the boy lacks eye contact, does not attend when called, does not comprehend what is around him or follow instructions, and likes to eat only cake and yogurt. We inquired how much television the boy and the girl had each watched, and why his sister was not affected. The answer was as follows: from the age of 6 months, the boy watched television more than 7 hours per day, while his twin sister watched much less, because the mother went out often and always took her along, leaving the boy sitting at home with his grandmother. The findings of the researcher:
1. For the prevention of autism spectrum disorder, a child under two years should be prevented from watching television altogether, and adults should interact with the child as much as possible with sound, movement, smiles, touch, and play; leaving a child under two years to a maid or to the television is a crime.
2. Children's channels that repeat content must display a warning on screen preventing children under two years from viewing, because such viewing causes symptoms of autism spectrum disorder.
3. The mind and the senses can be restarted, by reverse engineering and without medication: the mind and senses begin to work again, the child attends when called and identifies himself, looks at things for longer, becomes ready to learn and to make up for what was lost, and improvement begins to appear in the first week.
-
-
-
Neuroscience and Interior Architecture: Impact on Autism
Authors: Mohamed Cherif Amor and Ahmed Elsotouhy
Proposal Summary
Behavioral evidence indicates that, among the indoor environmental variables (i.e., noise, ambient temperature, and air quality), fluorescent lighting plays a critical role in facilitating or hindering daily activities for the neurotypical population (people who do not have autism, dyslexia, developmental coordination disorder, bipolar disorder, or ADD/ADHD) (Rashid & Zimring, 2008). For a neurodiverse population (e.g., ADD/ADHD, autistic, etc.), the picture becomes more complex (Amor, O'Boyle, Pati, Pham, & Jou, 2014; Amor, Pati, & O'Boyle, 2013; Pati, Amor, & O'Boyle, 2012). Specifically, autistic subjects become more distracted under fluorescent lighting, which generates agitation, hyperactivity, stress, and weaker cognitive skills, contributing to negative health and performance effects. For autistic subjects, functional neuroimaging suggests increased neural activity in sensory areas of the brain normally associated with stimulus-driven processing, and decreased activity in areas normally associated with higher cognitive processing. Hence, people with autism show unusually high activation in ventral occipital areas and abnormally low activation in prefrontal and parietal areas (Baron-Cohen, 2004). These findings remain controversial and debatable (Dawson & Watling, 2000; O'Neil, Meena & Robert Jones, 2007), particularly since the impact of environmental stimuli (light, color, sound, etc.) was not included. In a collaborative research project between Virginia Commonwealth University Qatar, the Hamad Medical Center Neuro-Radiology and Clinical Imaging Department, and the Shafallah Center for Children with Special Needs, the purpose of this research is to: 1) explore and compare behavioral and neural responses, and their impact on cognitive processes, of autistic subjects exposed to 3 types of fluorescent lighting correlated color temperatures (CCTs), and 2) explore the impact of the different color temperatures on the activation of the prefrontal and parietal areas, the brain regions associated with cognition that show minimal neural activity in people with autism. An experimental design will be used; subjects will be exposed to three correlated color temperatures in three applications—healthcare, academia, and commercial—while their neural and behavioral responses are recorded. The participants undergo 1) an anatomical scan and 2) a functional scan, using functional magnetic resonance imaging (fMRI) technology. Behavioral data will be analyzed using t-tests, factor analysis, and one-way analysis of variance, while the neural data maps will be analyzed using the FSL neuroimaging software. This research aims at providing behavioral and fresh neural benchmark data for designers, architects, facility planners, and industry professionals relative to the lighting color temperatures that facilitate or inhibit the cognitive skills of autistic subjects.
Precedents
This line of inquiry finds impetus in the Qatar National Research Strategy (2012), Pillar V, Social Science, Arts, and Humanities—develop methodological innovations, new data sources, and new measurements in the social sciences, arts, and humanities. Qatar, while enjoying a period of unparalleled prosperity, is faced with undreamed-of opportunities and complex challenges. Among the future challenges is the necessity of establishing advanced education, healthcare, and commercial environments that "provide citizens with [built environments] and opportunities to develop to their full potential" (QNV 2030, p. 18). To address this need, the study aims at developing a body of neuro-behavioral evidence that can inform the development of future design guidelines, further enhancing neurodiverse populations' (i.e., autistic subjects') experiences in their education, work, leisure, and living environments. This subsumes the development of design solutions that do not impede, but rather facilitate, everyday functioning. These data are particularly needed because institutions serving the intellectually challenged are growing in Qatar, including but not limited to the Shafallah Center for Children with Special Needs, Awsaj Academy, and the Center for Autism. The present line of inquiry and its outcomes will provide data that will benefit domestic, regional, and worldwide populations. "It is very difficult to say how many people have this kind of condition in Qatar simply because the statistics are not accurate, as people do not disclose their disabilities because of social and cultural barriers" (Qatar Peninsula, 2013). However, the World Health Organization (WHO) has indicated that the global median rate of autism prevalence is estimated at 62 per 10,000, although some studies have placed it substantially higher, and for the Middle East it may be an even bigger concern (Lamb & Lerner, 2015). In a recent study, the Simons Foundation Autism Research Initiative (2014) looked at the prevalence of autism, attention deficit hyperactivity disorder, obsessive-compulsive disorder, and Tourette syndrome in Denmark, Finland, Sweden, and Western Australia; the findings indicated that between 2000 and 2011 the number of diagnoses for each disorder grew by between 100 and 700 percent. Likewise, the Centers for Disease Control and Prevention (CDC, 2014) reports that the estimated prevalence of autism spectrum disorder (ASD) in the United States has increased roughly 29% since 2008, 64% since 2006, and 123% since 2002. Autism statistics in the US are reaching levels that deserve special attention. For instance, more than 3.5 million Americans live with an autism spectrum disorder, and it is predicted that within 10 years the annual cost of services for this population will range between $200–400 billion (Autism Society, 2014).
Autism, Design and Neuroscience
Emerging neuroscience research shows that environment-related activity such as cognition, perception, and wayfinding, and its behavioral consequences—anxiety, stress, happiness, and arousal—are reflected in the structures and electro-chemical processes of the brain (Amor, Pati, & O'Boyle, 2013; Pati, Amor, & O'Boyle, 2012; Eberhard, 2007; Mallgrave, 2011; Swanson, 2011; Zeisel, 2006). Behavioral evidence indicates that, among the indoor environmental variables (i.e., noise, ambient temperature, and air quality), fluorescent lighting plays a critical role in facilitating or hindering daily activities for the neuro-typical[1] population (Rashid & Zimring, 2008).
For the neuro-diverse population[2], this becomes more complex. Specifically, autistic subjects become more distracted under fluorescent lighting, which generates agitation, hyperactivity, stress, and weaker cognitive skills, contributing to negative health and performance effects (Carpman & Grant, 1993; Colman, Frankel, Ritvo, & Freeman, 1976). For autistic subjects, functional neuroimaging suggests increased neural activity in sensory areas of the brain normally associated with stimulus-driven processing, and decreased activity in areas normally associated with higher cognitive processing. Hence, people with autism show unusually high activation in ventral occipital areas and abnormally low activation in prefrontal and parietal areas (Baron-Cohen, 2004; Ring, Baron-Cohen, Wheelwright, Williams, Brammer, Andrew, & Bullmore, 1999). These findings remain controversial and debatable (Dawson & Watling, 2000; O'Neil, Meena & Robert Jones, 2007), suggesting the need for more systematic research. While there is a growing body of debated environment-behavior literature on the impact of fluorescent lighting on cognitive, behavioral, and psychosocial outcomes, little is known about the correlation between neural activity and the impact of fluorescent lighting correlated color temperature (CCT) on indoor behavioral outcomes. Neuroscience has revealed that seeing color activates the ventral occipital cortex, including the fusiform and lingual gyri (Hsu, Sharon & Thompson-Schill, 2012; Morita, Kochiyama, Okada, Yonekura, & Sadato, 2004), but little is known about how this neural activity changes under different lighting color spectrums (correlated color temperature, CCT, and spectral energy distribution, SED).
Objectives/Significance of the Study
The objective of this study is to 1) explore and compare the behavioral and neural responses of autistic subjects exposed to three fluorescent lighting correlated color temperatures: a) warm white (WW, 2700 CCT), b) cool white (CW, 4100 CCT), and c) daylight (DX, 5500 CCT), presented in three different settings—commercial, educational, and healthcare; 2) explore the impact of the different color temperatures on the activation of the prefrontal and parietal areas, the brain regions associated with cognition that show minimal neural activity in people with autism; and 3) compare the present findings with a prior study conducted by our group on ADHD populations. This research aims at providing innovative behavioral and neural benchmark data on the lighting color temperatures that facilitate or inhibit the cognitive skills of autistic subjects.
Research Design and Methods
An experimental design will be used for this study to collect behavioral and neural data. The same group of autistic subjects will be exposed to three categories of pictures—academic, commercial, and healthcare—rendered in three different correlated color temperatures (CCTs) per category. A comparative analysis of behavioral and neural data will be performed to identify similarities and differences. An IRB protocol for conducting the investigation will be requested from Virginia Commonwealth University and Hamad Medical Center. A purposive sampling strategy will be used to identify 50 participants living in Doha, Qatar. Sampling will proceed in two phases: 25 subjects in the first year (ages 6–12) and 25 additional subjects for the second-year experiment (ages 12 and up). Participants will be recruited through close collaboration with the Shafallah Center for Children with Special Needs. Data on brain activity will be collected via functional magnetic resonance imaging (Siemens 3T) at the Hamad Medical Center Neuro-Radiology and Clinical Imaging Department, a multi-user neuroimaging facility. The participants will undergo 1) an anatomical T1 scan (5 minutes) and 2) an EPI functional scan (17 minutes), while a random sequence of digitally generated high-resolution illustrations of the three types (i.e., warm white 2800 CCT, cool white 4100 CCT, and daylight 5500 CCT) is projected by a computer-controlled visual presentation system (E-Prime). Each image category will include 6 images (2 commercial, 2 educational, and 2 healthcare interior environments) for a total of 72 images (18 images blocked by application, 18 randomly organized, then the order reversed) that every participant evaluates. Concomitantly, the participants will be asked to respond to each image with a fiber-optic button device, rating each image on a seven-point Likert satisfaction scale from 1 = very dissatisfied to 7 = very satisfied. The procedure will last approximately 20–30 minutes per participant. Statistical parametric mapping (SPM8, Wellcome Department of Cognitive Neurology, London, UK) will be used for imaging preprocessing as well as the statistical analysis.
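To make the planned behavioral analysis concrete, the sketch below shows how Likert ratings from the three CCT conditions could be compared with a one-way analysis of variance and a follow-up t-test, as the abstract describes. This is a minimal illustration under stated assumptions, not the study's analysis pipeline: the rating values are invented placeholders, and SciPy is assumed only because it is a common choice for such tests.
```python
# Minimal sketch of the described behavioral analysis: a one-way ANOVA over
# 7-point Likert satisfaction ratings collected under three fluorescent CCT
# conditions. All rating values below are invented placeholders.
from scipy import stats

warm_white = [2, 3, 2, 4, 3, 2, 3]   # hypothetical ratings under 2800 CCT
cool_white = [4, 5, 4, 6, 5, 4, 5]   # hypothetical ratings under 4100 CCT
daylight = [5, 6, 5, 6, 7, 5, 6]     # hypothetical ratings under 5500 CCT

# One-way ANOVA: does mean satisfaction differ across the three CCTs?
f_stat, p_value = stats.f_oneway(warm_white, cool_white, daylight)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# A pairwise t-test (the abstract also mentions t-tests); in practice this
# would need a multiple-comparison correction such as Bonferroni.
t_stat, p_pair = stats.ttest_ind(warm_white, daylight)
print(f"warm white vs daylight: t = {t_stat:.2f}, p = {p_pair:.4f}")
```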
Anticipated Findings and Dissemination Plans
It is anticipated that the comparison of neural and behavioral data will indicate that the least satisfying color is the warm white color temperature (2800 K). The cool white (4100 K) and full spectrum (6000 K) correlated color temperatures might generate better levels of behavioral satisfaction and neural activation of the cerebellum, the superior temporal gyrus, the middle frontal gyrus, and the angular gyrus, regions respectively involved in social interaction, analytical tasks, and memory retrieval, which is suggestive of the activation of neural cognitive processes. Members of the research team will contribute papers to peer-reviewed international journals, including but not limited to the Health and Environmental Design Research Journal (http://her.sagepub.com/) and the Environment and Behavior Journal (http://eab.sagepub.com/). Team members will also deliver presentations at relevant international conferences, such as the Academy of Neuroscience for Architecture annual conference (http://www.anfarch.org), the Healthcare Design annual conference (http://www.healthcaredesignmagazine.com), and the Environmental Design Research Association (www.edra.org). Similarly, the research findings will be published with QScience.com, an innovative and collaborative, peer-reviewed, online publishing platform from Bloomsbury Qatar Foundation Journals (BQFJ) (www.Qscience.com). The outcome of the research will be further shared with the HBKU Faculty Forum lecture series.
- [1] Neuro-typical (NT) is a concept coined in the autistic community as a label for people who are not on the autism spectrum. The term eventually came to be used for anyone who does not have atypical neurology; in other words, anyone who does not have autism, dyslexia, developmental coordination disorder, bipolar disorder, or ADD/ADHD (National Symposium on Neuro-diversity, 2012).
- [2] Neuro-diverse (ND) is a concept where neurological differences are to be recognized and respected as any other human variation. These differences can include those labeled with Dyspraxia, Dyslexia, Attention Deficit Hyperactivity Disorder, Dyscalculia, Autistic Spectrum, Tourette Syndrome, and others (National Symposium on Neuro-diversity, 2012).
-
-
-
Effective Stakeholder Engagement for Better Water and Energy Governance
The world is experiencing serious water and energy challenges. The world's growing population is one of the factors that contribute to the shortage of resources and place them at the top of future global challenges. The current world population is 7 billion and is expected to grow to 9 billion by 2050. In view of this fact, if the current rates of water and energy consumption continue, the need for new resources will keep increasing. These facts demand the cooperation and coordination of all parties: researchers, policy makers, and stakeholders. The latter here means all those who are affected in one way or another by water and energy challenges, by the suggested solutions, and by the results of any newly developed approaches. Stakeholder engagement has been gaining popularity in the water and energy industries as a route to better governance of resources. There is a trend toward involving stakeholders in public policy decisions; citizens should know how those decisions are taken. Involving citizens in decision making and in governing the water and energy supply will help governments bring the challenges to light more effectively and manage resources more successfully. At the global level, numerous case studies have been carried out to define stakeholder engagement precisely; this is how the process of engaging stakeholders began to take serious steps toward practice rather than remaining at the theoretical stage. However, decision makers need evidence of which principles work best and of the areas in which stakeholders' contributions work. So far, careful assessments evaluating the effectiveness of stakeholder engagement are still needed. Such experience-based evaluations are essential for understanding how far engaging stakeholders helps in the inclusive governance of resources, in both the water and energy sectors. This research assesses and evaluates the effectiveness of stakeholder engagement in policy implementation and decision making. It is an evidence-based assessment that helps policy makers know the best working principles and the areas where stakeholders contribute most efficiently. Before the assessment stage, the research sheds light on several areas that must be studied first.
Motives of Stakeholder Engagement
Numerous factors flag the necessity of developing sustainably the way resources are governed, in order to better face future challenges. The four main motives are: demographic and economic trends, which push up the demand for new resources and limit the ability of governments to respond efficiently; climate change, which plays a critical role in the scarcity of resources; socio-political issues, including new policies, regulations, standards, and sustainability goals, all of which call for adaptive governance; and technologies that accelerate communication and strengthen relationships.
Stakeholders at All Levels and the Power of Each Party
The research categorizes stakeholders into three groups: marginalized groups (citizens, the poor, youth, and women); novice stakeholders (long-term institutional investors and property developers); and prime stakeholders (governments, suppliers, service providers, involved businesses and organizations, civil society, lawmakers, and farmers).
Barriers Hindering the Effectiveness of Engagement
Stakeholder engagement is a somewhat complicated process affected by a variety of factors; it differs from place to place, and from one individual or organization to another. The barriers, however, are common and identifiable, and addressing them helps in overcoming them. Common barriers, and the reasons behind them, include: absence of public awareness and concern; the complexity of the engagement process; fuzziness about how to use stakeholder input, which makes consultation difficult; limitations of funding, staff, and time; lack of leadership and political will; lack of strong legal statements supporting the case; irregularity of information; reluctance to give up power and resistance to change; lack of sufficient capacity; and the difficulty of obtaining consultation from the prime categories.
Stakeholder Engagement Approaches
Engagement of stakeholders comes with a wide range of different approaches; public information programs, citizens' referenda, workshops, and taskforces are just a few examples. These approaches are chosen essentially according to objectives, time, and place. The diversity of approaches and their dependency on several factors make the engagement principle complicated for decision makers. Stakeholders can be engaged through two main approaches, formal and informal, both based essentially on web applications. The research shows the upsides as well as the downsides of each approach, with the comparison grounded in on-the-ground experiences and case studies. After shedding light on each approach, the research suggests which approach works for each kind of stakeholder, stage of policy, objective, and governance level.
Key Principles and a Checklist for Public Action
Providing decision makers with a supportive guide on how to use stakeholder engagement successfully is very useful. This research outlines the requirements for effective engagement that yields benefits in both the short and the long term. The offered principles, along with a Checklist for Public Action, self-assessment tools, and indicators, help decision makers discover the defective areas that need improvement. Among the formal principles, the research discusses the importance of the following systematic, comprehensive approaches for decision makers, as the way to obtain far better results and outputs for both the time involved and the resources required; in this way, any issues related to stakeholders, and any arising risks, can be managed more efficiently. Know your stakeholders very well: address all stakeholders who will be affected by the outputs; define what motivates them, how they will interact, and the responsibilities they can take on; and answer questions such as who will be affected by the results. Map the objectives of stakeholder involvement: put limits on engagement in decision making and depict how stakeholders' inputs will be used. Share information: distribute the required information and provide stakeholders with suitable resources at both the financial and the human level. Put assessment frameworks in place: the process of engaging stakeholders requires assessment at different stages, and these assessments provide the feedback essential for improvement and adjustment. Outline the process of stakeholder involvement with precise and transparent policy and legal frames: surrounding the process with responsible authorities and organizational principles is necessary. Customize the type and level of engagement to meet the requirements, and add flexibility to the engagement process so that it adapts to any unexpected changes.
The Assessment Stage of Stakeholder Engagement
There remains a persistent question: how effective is the system of engaging stakeholders? The lack of sufficient evaluation of the system with regard to costs, benefits, and effectiveness calls for efficient analyses and assessments. The research offers an assessment carried out to evaluate how effective the contribution of stakeholders to decision making is. It then addresses the challenges of efficient evaluation, and finally evaluates the assessment tools themselves to underscore their strengths and weaknesses. The research reviews the procedures and results of the available assessment tools in order to propose models of expected costs, benefits, and risks. When stakeholders are engaged successfully, the result can be a win-win for decision makers and society. Stakeholder engagement is an absolute necessity for facing current and future global challenges. The decision facing decision makers is not whether to engage stakeholders, but when and how to engage them effectively. Decision makers need an evaluation of stakeholder engagement that discloses the key principles and the areas where stakeholders contribute most efficiently. This research helps them know which approach is best for which stakeholder, objective, policy, and governance level.
-
-
-
Locations of Temporary Distribution Facilities for Emergency Response Planning
Authors: Rojee Pradhananga, Danya Khayal, Shaligram Pokharel, Fatih Mutlu and Jose Holguin-Veras
Resource planning in the emergency response phase is challenging primarily because resources have to be delivered to the affected regions in a timely manner and in the right quantities. Disasters such as hurricanes, epidemics, and chemical explosions generally impact large regions, and emergency supplies are needed for several days. Demand for resources at one location in a given period may not exist in the next period, or a particular location may have very high demand in the subsequent period. This dynamic change in demand patterns adds further challenges to the planning process. Changes in demand, in both location and quantity, are usually tackled through the allocation of resources at prepositioned facilities. However, prepositioned facilities may be few in number, and distribution of resources to the affected area may require additional funds for transportation and other overhead costs. In such a case, distributing resources through a number of temporary facilities located near the demand centers can significantly improve the distribution process, thereby decreasing the supply response time. Therefore, in this paper, we propose a network flow model for emergency response planning which provides location and allocation plans for temporary distribution facilities over short distribution periods in the planning horizon. We assume that the individual demands in close vicinity are grouped at so-called aggregated demand points (ADPs). The distribution process initiates from a central supply point (CSP), a collection point that continuously acquires the resources and prepares them for distribution. In each distribution period, the resources available at the CSP are allocated to the temporary distribution centers (TDCs) for distribution to the ADPs. The model considers periodically changing demands at the ADPs and supply availability at the CSP. The location and allocation decisions are therefore dynamic decisions carried out in each distribution period, and the TDCs located in a period are functional only for that period. The model allows delayed satisfaction of demand when resources in a period are insufficient, and allows transfer of excess resources from one relief facility to another in the next time period. The consideration of dynamic decisions, transfer of excess resources, and provision for delayed satisfaction of demand makes the proposed model unique and more representative of actual relief distribution. The objective is to minimize the total social cost, which is the sum of the logistics and deprivation costs over all distribution periods. The logistics cost consists of the fixed setup costs and the transportation costs; the deprivation cost is the penalty cost associated with delayed satisfaction of the supplies. The model is tested on a network for numerical analysis. The analysis shows that the location of TDCs in a time period influences the total cost of the response. The results show that relief response can be more effective if movement of excess resources from one period to the next is allowed; when such movement is not allowed, shortage cost, and eventually the total cost of the emergency response, can increase. The analysis also shows that the model is solvable for large and complex problem instances within a short computation time, demonstrating its robustness and applicability to practical-size distribution problems.
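To illustrate the structure of such a model, the sketch below encodes a toy two-period version of the location/allocation problem described above as a mixed-integer program in PuLP. It is a minimal reading of the abstract, not the authors' formulation: all sets, costs, demands, and the deprivation penalty are invented placeholders, and details such as TDC capacities are simplified.
```python
# Toy two-period sketch of the described location/allocation model in PuLP.
# All sets, costs, demands, and penalties below are invented placeholders.
import pulp

T = [0, 1]                                   # distribution periods
J = ["TDC1", "TDC2"]                         # candidate temporary facilities
A = ["ADP1", "ADP2"]                         # aggregated demand points

supply = {0: 100, 1: 60}                     # CSP availability per period
demand = {("ADP1", 0): 40, ("ADP2", 0): 50,
          ("ADP1", 1): 30, ("ADP2", 1): 70}
setup = {"TDC1": 50.0, "TDC2": 70.0}         # per-period TDC setup cost
c_in = {"TDC1": 1.0, "TDC2": 2.0}            # CSP -> TDC unit transport cost
c_out = {("TDC1", "ADP1"): 1.0, ("TDC1", "ADP2"): 3.0,
         ("TDC2", "ADP1"): 3.0, ("TDC2", "ADP2"): 1.0}
depriv = 10.0                                # deprivation cost per unit-period of delay
M = sum(supply.values())                     # loose bound on TDC throughput

m = pulp.LpProblem("relief_distribution", pulp.LpMinimize)
z = pulp.LpVariable.dicts("open", [(j, t) for j in J for t in T], cat="Binary")
x = pulp.LpVariable.dicts("csp2tdc", [(j, t) for j in J for t in T], lowBound=0)
y = pulp.LpVariable.dicts("tdc2adp", [(j, a, t) for j in J for a in A for t in T], lowBound=0)
s = pulp.LpVariable.dicts("shortage", [(a, t) for a in A for t in T], lowBound=0)
r = pulp.LpVariable.dicts("carry", [(j, t) for j in J for t in T], lowBound=0)

# Total social cost = logistics (setup + transport) + deprivation cost.
m += (pulp.lpSum(setup[j] * z[j, t] for j in J for t in T)
      + pulp.lpSum(c_in[j] * x[j, t] for j in J for t in T)
      + pulp.lpSum(c_out[j, a] * y[j, a, t] for j in J for a in A for t in T)
      + pulp.lpSum(depriv * s[a, t] for a in A for t in T))

for t in T:
    # The CSP cannot ship more than it has acquired in period t.
    m += pulp.lpSum(x[j, t] for j in J) <= supply[t]
    for j in J:
        carried_in = r[j, t - 1] if t > 0 else 0
        # TDC flow balance: inflow + carried-in stock = outflow + carried-out.
        m += x[j, t] + carried_in == pulp.lpSum(y[j, a, t] for a in A) + r[j, t]
        # A TDC handles flow only in the periods in which it is opened.
        m += x[j, t] + carried_in <= M * z[j, t]
    for a in A:
        unmet_before = s[a, t - 1] if t > 0 else 0
        # Demand is met now or delayed; unmet demand rolls into the next period.
        m += pulp.lpSum(y[j, a, t] for j in J) + s[a, t] == demand[a, t] + unmet_before

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("total social cost:", pulp.value(m.objective))
print("opened:", [(j, t) for j in J for t in T if z[j, t].value() > 0.5])
```
Here the carry variables are what permit inter-period transfer of excess resources; fixing them to zero would reproduce the no-transfer case that the analysis found to raise shortage cost and total cost.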
Acknowledgement
This research was made possible by NPRP award NPRP 5-200-5-027 from the Qatar National Research Fund (a member of The Qatar Foundation). The statements made herein are solely the responsibility of the authors.
-
-
-
Making a Culturally Sensitive Symbol Dictionary for Non-Verbal Communicators in the Arab Region
Authors: Amal Ahmad and Amatullah Kadous
Introduction
Speech is a complex process requiring intellectual and physical capabilities. At times, this process can become compromised, making the verbalization of thoughts difficult, and other methods of communication may therefore be required. Alternative and augmentative forms of communication (often known as AAC) can help individuals with a wide range of speech and language impairments, such as autism, Down's syndrome, cerebral palsy and aphasia. Examples of alternative means of communication include the use of symbols to convey a message, often with synthesized text-to-speech. This has proven to be an effective alternative and can allow people to hold conversations and participate (Light & McNaughton, 2012), and it can enhance their quality of life (Hill, 2010). However, there have been challenges in achieving this in the Arab region due to the lack of Arabic symbol inventories and the reliance on Westernized symbols (Hock & Lafi, 2011); as a result, uptake and the positive outcomes of using AAC have been limited. The Arabic Symbol Dictionary research team has endeavored to create a new, freely available resource that will be the first of its kind, focused on creating culturally, environmentally, religiously and linguistically appropriate symbols for the Arab AAC community. Through engagement with AAC users, teachers, therapists and parents using a participatory approach, the team has found that current Westernized symbols are not appropriate for the region's culture and lifestyle. This paper discusses the factors the team has taken into account to ensure the symbols are appropriate for the Arabic AAC population. The considerations discussed include cultural adaptations, religious sensitivities, linguistic factors and suitable portrayal of the environment.
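As a concrete picture of the interaction described above, the hypothetical sketch below shows the core AAC loop: a user selects symbols, and the device speaks the composed message through synthesized text-to-speech. The symbol names and glosses are invented, and pyttsx3 is assumed only as one convenient offline TTS engine, not as the project's technology.
```python
# Hypothetical sketch of a symbol-based AAC utterance: tapped symbols are
# mapped to spoken glosses and voiced with text-to-speech. The board below
# is invented for illustration; pyttsx3 is an assumed, generic TTS choice.
import pyttsx3

# A tiny symbol board: symbol identifier -> spoken gloss.
symbol_board = {
    "I": "I",
    "want": "want",
    "drink": "a drink",
}

selected = ["I", "want", "drink"]            # symbols tapped by the user
message = " ".join(symbol_board[s] for s in selected)

engine = pyttsx3.init()                      # initialize the TTS engine
engine.say(message)                          # queue the composed utterance
engine.runAndWait()                          # speak it aloud
```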
Method
At the outset of the project, an expert group of advisors advised against creating a brand-new symbol set and suggested developing symbols that could complement existing symbol sets. As the research team adopted a participatory approach to design and development decisions, a self-selecting AAC forum of teachers, therapists and parents was introduced to a choice of freely available symbol sets to compare with those they already used; their choice would form the basis for the symbol dictionary's development. The choice having been made, it was then essential to develop, in collaboration with the AAC forum, a set of criteria to inform the adaptations made to any symbols felt to be culturally inappropriate or lacking in linguistic correctness. Criteria used to review the symbols included: flipping symbols, especially those with arrows, to follow Arabic sentence orientation from right to left; adapting dress to be modest and in line with Qatari and modern general Arab dress codes; adding darker physical features to characters; changing symbols that depicted affection and/or mixing with the opposite gender; reducing greenery in the environment; considering social hierarchy; including the nanny/maid in symbols related to the family unit; and including religious holidays and customs, local landmarks and food. Based on these criteria, the graphic designer would adapt the symbols and post them to the team's closed Google+ group for internal review. Once a symbol had been reviewed by 3 team members, it was uploaded to the Arabic Symbol Dictionary Symbol Manager, where the AAC forum would later vote on a batch of adapted symbols. Symbols accepted as culturally suitable were then uploaded to the 'Tawasol' website for free public download. Comments were analyzed, adaptations were made accordingly to the symbols voted culturally inappropriate by the AAC forum, and these symbols were re-voted on with the next batch.
Results
In the first round of voting, where participants were asked to select their preferred symbol set, the ARASAAC symbol set was chosen; however, only 3.4% of its symbols were marked as 'good', largely because of cultural sensitivities. A number of symbols ('batch 1') were then adapted by the graphic designer and voted on by 63 voters. The symbols' cultural acceptability increased to an average score of 4.14 out of 5 thanks to the cultural, religious, linguistic and environmental changes; 16.5% of comments related to changes still needed in the representation of culture (6.8%), religion (7.2%), language (2.4%) and environment (0.1%). A much smaller group of 21 voters voted on the second batch of newly developed symbols, whose average cultural acceptability rating was 4.03 out of 5; 16.8% of comments related to changes still needed in the representation of culture (14%), religion (1.4%), language (1.4%) and environment (0%). The main comments regarding culture included ensuring that women were wearing a black Abaya and Shayla, and that if they were in colored clothing they did not "look like a maid". There were mixed comments about whether children should be represented in the symbols wearing traditional Qatari dress or non-traditional dress. Physical features were also commonly commented on, with requests for darker skin tones and facial hair for older male figures (e.g., father). Concerns were also raised about some hand gestures used in the symbols that did not agree with Qatari culture (e.g., the hand over the chest to indicate 'thank you' is not used in Qatari culture). Symbols with Qatari foods, such as those with rice and meat or chicken, needed to show whole joints of lamb or chicken on the rice rather than small pieces. Religious feedback included ensuring that women's hair was covered, that symbols related to eating showed the character eating with the right hand, that the five daily prayers showed the correct position of the sun, and that women's feet were covered during prayer. Linguistic feedback suggested that symbols for pronouns needed to include the dual form, as well as male and female versions of each symbol, both features of the Arabic language. Environmental feedback included reducing greenery.
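As an illustration of how voting results of this kind can be aggregated, the short sketch below computes a mean acceptability score per symbol and the share of comments falling into each concern category. It is purely hypothetical: the votes, comment tags, and the 4.0 acceptance threshold are invented for illustration and are not the project's actual data or decision rules.
```python
# Hypothetical aggregation of AAC forum votes: mean 1-5 acceptability per
# symbol, plus the share of comments per concern category. All data and the
# 4.0 acceptance threshold are invented placeholders.
from collections import Counter

votes = {  # symbol -> list of 1-5 acceptability ratings from the forum
    "mother": [5, 4, 5, 3],
    "prayer": [4, 4, 5, 5],
    "thank_you": [2, 3, 2, 4],
}
comment_tags = ["culture", "culture", "religion", "language", "culture"]

for symbol, ratings in votes.items():
    mean = sum(ratings) / len(ratings)
    status = "accept" if mean >= 4.0 else "revise and re-vote"
    print(f"{symbol}: mean {mean:.2f} -> {status}")

counts = Counter(comment_tags)
total = sum(counts.values())
for tag, n in counts.items():
    print(f"{tag}: {100 * n / total:.1f}% of comments")
```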
Discussion
The results showed a steady improvement in voters' ratings of the cultural appropriateness of the symbols. The majority of comments related to the representation of attire, and this was discussed in detail within the team and with participants. Many therapists and teachers requested that characters be dressed in traditional Qatari dress, while many others suggested this would serve only Arabic Gulf AAC users and would not appeal to the wider Arab population. Through a survey, the team concluded that it was best to create symbols with both general Arab clothing and traditional Qatari dress, to serve the widest possible Arabic AAC population. The differences in results between the second and third voting sessions could be attributed to a number of factors. In the third session, voters were strongly encouraged to provide detailed comments about the suitability of the symbols, which may have made them more critical in their feedback. Furthermore, the second round of voting was primarily on symbols for nouns, whose representation tended to be universally accepted, whereas the third round covered predominantly actions, interactions and concepts that are less concrete and more difficult to convey. It is also possible that voters were more confident with the voting process by the third round, giving them the confidence to be open with their opinions.
Conclusion
The contribution of the AAC forum and AAC users to this project has yielded ample feedback that is essential to making the Arabic Symbol Dictionary relevant and useful. The Arabic Symbol Dictionary team has made an active effort to maintain open lines of communication with participants to guarantee, where possible, that the symbols are culturally, religiously, linguistically and environmentally appropriate. It is for this reason, and because of the lack of freely available culturally appropriate symbols, that participants have requested access to these symbols well before their official release date. Participants also regularly contact the team with lists of words for which they require adapted symbols that do not exist in available symbol sets or are inappropriate for the Qatari setting. All symbols that have been designed and voted on have been added to the Tawasol website under a Creative Commons Attribution-ShareAlike license (CC BY-SA 4.0)[1] so they can be used by the community and by companies providing communication devices. The team is concurrently seeking feedback from other GCC and Arabic-speaking countries to ensure this resource is utilized and made available to a widespread audience. With the majority of Arab countries attributing 4–5% of disabilities to speech impairments, and numerous Arab countries reporting that between 13 and 15% of all disabilities are related to communication (Disability in the Arab Region, 2014), the need for such a resource is becoming increasingly apparent.
References
Hill, K. (2010). Advances in augmentative and alternative communication as quality-of-life technology. Physical Medicine and Rehabilitation Clinics of North America, 21(1), 43–58.
Hock, B. S., & Lafi, S. M. (2011). Assistive Communication Technologies for Augmentative Communication in Arab Countries: Research Issues. UNITAR e-Journal, 7(1), 57–66.
Light, J., & McNaughton, D. (2012). Supporting the communication, language, and literacy development of children with complex communication needs: State of the science and future research priorities. Assistive Technology, 24(1), 34–44.
Economic and Social Commission for Western Asia (ESCWA) and League of Arab States (2014). Disability in the Arab Region: An Overview.
[1] http://creativecommons.org/licenses/by-sa/4.0/
-
-
-
Momentum for Education Beyond 2015: Improving the Quality of Learning Outcomes & Enhancing the Performance of Education Systems in the GCC: Kuwait & Qatar Cross Country Analysis
Authors: Faryal Khan and Iman Chahine
This cross-case analysis is based on two case studies in two GCC countries: Qatar and Kuwait. The studies gathered data through a combination of quantitative and qualitative methods to analyze existing learning outcomes, teacher efficacy, and the extent to which instructional strategies align with curriculum expectations in math, science and reading. The analysis was carried out in two stages. First, we conducted separate within-case analyses of each country, using matrices to discern patterns and compare trends. In the second round, we conducted a cross-case analysis identifying main themes across both sites and comparing and contrasting findings across school districts. To conduct the within-country and across-country analyses, we employed three parallel processes: meta-analysis of quantitative data, coding and categorizing strategies, and display strategies. The cross-case analysis is motivated by global efforts to promote the use of quality learning indicators in education and by the Education 2030 Framework for Action. We argue that the effective use of educational outcomes is essential to improving the overall quality of learning.
Cross-Case Context and Purpose
The cross-case analysis covers two GCC Member States, Qatar and Kuwait, two countries with emerging economies and comparable income levels. The purpose of this analysis is to provide insights into the challenges and constraints impeding improvement in the quality of education and system performance in the GCC countries, as well as to inform potential prospects for post-2015 education. The specific objectives of the analysis are three-fold: 1) to examine learning outcomes in each country in order to ascertain strengths and weaknesses of the education systems in achieving the Education for All (EFA) goals; 2) to monitor progress toward the EFA goals by gathering quantitative and qualitative data; and 3) to identify potential measures for developing a post-2015 education approach in the GCC countries.
Cross-Case Research Questions
This cross-case analysis addresses the following questions: What are the patterns in teaching efficacy across cases/sites? Are there differences in efficacy beliefs between males and females across content areas (math, science and reading) and grade levels (grades 4 & 8)? What instructional strategies do teachers employ to help students achieve learning outcomes, content and cognitive processes, specifically in grade 4? How does the performance of grade 4 students in Qatar compare with that of their counterparts in Kuwait in mathematics?
Cross-Case Analysis Methodology
Individual case studies were conducted using quantitative and qualitative research techniques. Each case was examined and a case matrix developed to include the major concepts of the research questions. Following the development of individual case reports, a cross-case matrix display was developed for each of the critical issues underlying the research questions (teacher self-efficacy, instructional design/strategies, and learning outcomes/student achievement).
Sample of schools
Data in the cross-case analysis were collected using two random samples: 22 public schools in Kuwait and 28 independent schools in Qatar (see Tables 1 & 2, attached). Individual case studies were based on multipurpose, nationally representative samples of boys' and girls' schools randomly selected from 6 districts in Kuwait and 7 districts in Qatar. The choice of participating districts was based on data availability, using surveys addressed to: 1) school principals and subject coordinators, 2) math, science and reading teachers, and 3) students in grades 4 & 8. We employed four data collection techniques to gather qualitative and quantitative data: survey questionnaires, observation of selected classrooms, tests, and focus group discussions. Student questionnaires identified students' cognitive and affective dispositions towards learning, and teacher surveys measured teaching efficacy and beliefs.
Cross-Case Analysis Findings/Discussion
We employed two units of analysis: teachers and students. For teachers, we focused on teacher self-efficacy and instructional design/strategies; for students, on learning outcomes/student achievement. The two case studies revealed both similarities and differences across these themes.
Theme 1: Learning outcomes/student achievement in math (grade 4)
In Kuwait, there was a significant difference (p < .001 at α = .05) of around 5 points in group mean scores between females and males across districts: females (mean = 19.26 points) outperformed males (mean = 14.21 points) in all six districts. In Qatar, by contrast, grade 4 boys (mean = 36.85, SE = 1.199) did significantly better than girls (mean = 31.99, SE = 1.508) on the math achievement test, with a group mean score around 5 points higher than the girls'.
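For readers who want the comparison made concrete, the gender contrast above is the kind of result an independent-samples t-test produces. The sketch below uses SciPy with fabricated scores standing in for the real data; it illustrates the form of the test, not the study's actual analysis code.

```python
# Illustrative independent-samples t-test for a gender gap in mean math
# scores. The score arrays are fabricated placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
girls = rng.normal(loc=19.3, scale=4.0, size=250)  # assumed spread
boys = rng.normal(loc=14.2, scale=4.0, size=250)

t_stat, p_value = stats.ttest_ind(girls, boys, equal_var=False)  # Welch's t-test
print(f"mean difference = {girls.mean() - boys.mean():.2f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant if p < .05
```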
Theme 2: Teacher self-efficacy
In Kuwait, we found significant differences (p = .04 at α = .05) in teachers' perceptions of their ability to teach their subjects across the different governorates. Furthermore, we noted significant variations across governorates in how teachers perceive the impact of their efforts on student achievement. For example, we found significant differences in degrees of agreement on Q10 (When a low-achieving child progresses it is usually due to extra attention given by the teacher) (p = .015 at α = .05), on Q12 (The teacher is generally responsible for the achievement of students) (p = .045 at α = .05), and on Q13 (Students' achievement is directly related to the teacher's effectiveness in teaching) (p = .028 at α = .05). The differences were most prevalent between the suburban governorates (such as Al-Ahmadi and Jahraa, the two largest in Kuwait) and the four urban governorates. Moreover, significant differences in level of agreement were reported in the way parents in different governorates perceive the role of the teacher (Q14: If parents comment that their child is showing more interest in mathematics at school, it is probably due to the performance of the child's teacher) (p = .015 at α = .05) (see Fig. 10); here too, the differences arise between urban (Hawalli) and suburban (Jahraa) governorates. We noted significant differences between teachers' responses across girls' and boys' schools, particularly in relation to teaching effectiveness and student learning. Furthermore, across math, science and literacy teaching, we noted significant differences (p = .010 at α = .05) only in teachers' expectations regarding other teachers' beliefs in their students' learning. In Qatar, on the other hand, we found significant differences between female and male teachers' responses on Q2 (I will continually find better ways to teach mathematics), Q17 (I wonder if I will have the necessary skills to teach mathematics), Q18 (Given a choice, I will not invite the principal to evaluate my mathematics teaching) and Q20 (When teaching mathematics, I will usually welcome students' questions). Across districts, we found significant differences among teachers on Q2 (I will continually find better ways to teach mathematics) and Q12 (The teacher is generally responsible for the achievement of students in mathematics). Across grade levels, we noticed significant differences between teachers on Q9 (The inadequacy of a student's mathematics background can be overcome by good teaching) and Q14 (If parents comment that their child is showing more interest in mathematics at school, it is probably due to the performance of the child's teacher). We also compared teachers' expectations of their students across subjects, school districts, grade levels and gender, and found significant differences only across grade levels.
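Differences in Likert-scale agreement across several governorates, as reported above, are typically tested with a one-way ANOVA. Below is a minimal SciPy sketch over fabricated response data; it shows the form of the test rather than reproducing the study's analysis.

```python
# Illustrative one-way ANOVA: does mean agreement with a survey item
# (1-5 Likert scale) differ across governorates? Data are fabricated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Assumed per-governorate samples of teacher responses to one item.
hawalli = rng.integers(3, 6, size=40)    # urban: skews toward agreement
jahraa = rng.integers(1, 5, size=40)     # suburban: skews lower
al_ahmadi = rng.integers(2, 6, size=40)

f_stat, p_value = stats.f_oneway(hawalli, jahraa, al_ahmadi)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # alpha = .05, as in the study
    print("Reject H0: mean agreement differs across governorates.")
```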
Theme 3: Instructional design/strategies
In both countries, the results of the Teaching Method survey indicated that, particularly for math and science, teaching focuses on basic drill-and-practice techniques. Additionally, there was a moderately low rate of integration of computer applications in the teaching of mathematics and science at the middle level.
Implications
The Qatar and Kuwait case studies yielded different findings with respect to teacher self-efficacy and student achievement (grade 4). The commonality that emerged concerned teaching methods, namely the focus on drill and practice. This has implications for policy and practice. However, teachers cannot carry the responsibility alone: we argue that unless teachers are supported with well-designed curricula and assessment strategies to improve teaching and learning, it will be hard to improve the quality of education outcomes and learning and thus to achieve the post-2015 goals.
-
-
-
Sporting Events and Inter-Religious Relations: Qatar's World Cup 2022
Authors: Gurharpal Singh and Lawrence Saez
The development of mega sports events over the last thirty years has been underpinned by the case for urban regeneration or national development. The literature on urban policy and planning highlights the interplay between mega-sporting events and notable transformations in urban development (Gratton and Henry 2002, Hiller 2000, Chalkley and Essex 1999). Cities and nations have sought to bid competitively for major sporting events (e.g., the World Cup, the Olympics, the Asian Games and regional sports competitions) both to showcase achievement and to create opportunities for the development of new physical infrastructure. Such events also provide a potential forum from which to promote tourism to the host cities or countries. Urban regeneration through sporting events has thus become a major theme in urban policy and planning. The impacts of seemingly ongoing economic restructuring and technological and policy change have meant that the basis of many urban economies has undergone fundamental shifts, within which certain areas suffer high levels of social exclusion and deprivation (Hall 2004). The concept of urban regeneration through sporting events includes both physical and social dimensions (Page and Hall 2003). The physical component is primarily concerned with architecture and image, whereas the social dimension is concerned with improving the quality of life of those who already live in the target areas. Only rarely, if at all, have mega-sporting events been promoted for their social or cultural significance. The London 2012 Olympic bid, for example, was predicated on the importance of London as a capital that promoted and valued the social, ethnic and religious diversity of an increasingly cosmopolitan society (Fussey, Coaffee, Armstrong and Hobbs 2011, Raco and Tunney 2010). The value of sport in furthering such social change should not be under-estimated, whether at the level of the city or the nation. Although some of the literature on urban planning has stressed the potential sociological transformation that emerges from hosting major sporting events, primarily through the construction of a national identity (Tomlinson and Young 2006, Horne and Manzenreiter 2006), there is a major gap in the literature on the impact of sporting events on inter-religious relations. The Qatar World Cup of 2022 offers a major opportunity to rethink the implications of using sport as a way of facilitating better cultural and inter-religious relations in one of the most conflict-torn regions in the world. This particular event will allow for the assessment of the impact of hallmark events on a social fabric that is maintained through abidance by specific religious norms. Some research has explored the potentially disruptive social transformation that participation by female athletes and spectators in sporting events could have on sociocultural sensitivities in Muslim countries (Amara and Brown 2008, Jahromi 2011). In contrast, this paper will outline the potentially constructive social transformation that a major sporting event could have on enhancing inter-religious relations, and will map out the areas of potential analysis for such a project.
The paper will examine the institutional resources available for such an exercise and evaluate the policy effectiveness of the promotional activities associated with the Qatar World Cup bid and of the organisational processes that aim to encourage greater social diversity and interaction. It will outline details of a major project that will develop a framework for working with key stakeholders to create policy-effective initiatives that can further strengthen understanding of inter-religious and inter-cultural relations. Our paper will draw upon analysis of social survey research conducted in countries with significant religious heterogeneity where large sporting events have been held. The findings from these surveys will be linked to future survey research in Qatar.
-
-
-
Personalizing the Museum Experience in Qatar
Introduction
Museum Personalization was identified as one of the six most important emerging trends for museums in 2015 by the Center for the Future of Museums.[1] It is an approach that focuses on the individual visitor rather than generic visitor groups and can be applied to a range of work areas including exhibit experiences, operations, marketing/communications and retail. The commonly perceived benefit of this approach is that visitors enjoy an enhanced or ‘tailored’ museum experience, while museums/institutions gain longer-lasting and more meaningful relationships with their audiences. Simple forms of personalization have been employed by museums since the early days of digital experimentation in the 1980s; however, as priorities have increasingly shifted towards understanding and building audiences, this approach is offering exciting new opportunities, and more sophisticated applications are now being explored. The potential for personalization has been significantly expanded by advances in technology over the past five years, especially in mobile phones, e-commerce, social media and wearable tech.
Context
Two recent examples of museum personalization can be found at the Cooper Hewitt Design Museum in New York City and the Dallas Museum of Art. The Cooper Hewitt Design Museum reopened to the public in December 2014 with its ‘Smart Pen’, allowing visitors to collect their favorite objects and chart their activities over repeat museum visits. Within the first five months of operation the museum had recorded almost 1.4 million digitally collected objects and more than 54,000 new ‘visitor made designs’.[2] This has provided fascinating new insights into their collection. In 2012, the Dallas Museum of Art scrapped admission fees and introduced an innovative membership program that rewards individual visitors for participating in museum activities with digital badges and points. These can be redeemed for special rewards such as free parking, workshops, VIP events and film screenings. After two years of the membership program, overall attendance figures have increased by 29% and financial donations have risen by 19%.[3] Museums are not alone in developing personalization approaches, and much inspiration has been taken from other industries such as e-commerce (personal retail recommendations), theme parks (wearable personalization devices), communications (social media aggregation), education (personalized learning, e.g. Khan Academy and ClassDojo) and healthcare (customized health plans).
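The Dallas membership mechanic described above (earn points for participation, redeem them for rewards) is straightforward to model. The sketch below is a hypothetical Python illustration of such a loyalty scheme; the activity names, point values and reward costs are invented, and it is not the DMA's actual system.

```python
# Hypothetical sketch of a museum loyalty scheme of the kind described
# above: visitors earn points for activities and redeem them for rewards.
# Activity names and point values are invented for illustration.
ACTIVITY_POINTS = {"visit": 10, "workshop": 25, "film_screening": 15}
REWARD_COSTS = {"free_parking": 50, "vip_event": 200}

class Member:
    def __init__(self, name: str):
        self.name = name
        self.points = 0
        self.badges: list[str] = []

    def check_in(self, activity: str) -> None:
        """Credit points for a logged museum activity."""
        self.points += ACTIVITY_POINTS[activity]

    def redeem(self, reward: str) -> bool:
        """Spend points on a reward if the balance allows it."""
        cost = REWARD_COSTS[reward]
        if self.points < cost:
            return False
        self.points -= cost
        self.badges.append(reward)
        return True

m = Member("visitor_1")
for activity in ["visit", "workshop", "visit", "film_screening"]:
    m.check_in(activity)
print(m.points)                  # 60
print(m.redeem("free_parking"))  # True: 50 points spent, badge earned
```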
Research Purpose
Qatar Museums is the lead body for museums in Qatar, with two open museums[4], three gallery spaces[5] and several new museums in development. The organization's stated core purpose is to be ‘a cultural instigator for the creation generation’[6]. One of the key challenges to achieving this ambitious goal is developing regular museum-visiting habits among local audiences. This requires Qatar Museums to provide engaging museum experiences that attract visitors and to cultivate long-lasting, meaningful relationships with those visitors. This paper explores the potential impact of museum personalization as a tool for achieving these goals.
Methodology
In evaluating the potential impact of museum personalization for museums in Qatar, this paper looks at three areas of research: an extensive review of international museums that use personalization approaches or techniques, focusing on the benefits they bring to museums and audiences as well as the challenges they present; an evaluation of Qatar's technology landscape and how well suited it is to the introduction of personalization approaches in museums; and an analysis of ongoing qualitative and quantitative audience research commissioned by Qatar Museums, focusing on Qatar-specific perceptions of the use of technology.
Summary of Key Findings
International Review Summary – Through the extensive review of international museums using personalization techniques, the key benefits and challenges of museum personalization have been identified. The benefits can be split into two categories: benefits for visitors and benefits for museums/institutions. The main benefits for museum visitors are: an increased sense of control and ownership; tailored content delivered at the best place and time; the ability to record and chart ongoing personal experiences and achievements; greater awareness of the ‘museum community’; and a more comfortable and convenient visit. The main benefits for museums/institutions are: an enhanced reciprocal relationship between the museum and visitors; greater ability to identify changing audience needs; strengthened relationships with education professionals and community leaders; enhanced operational effectiveness; and innovative new income-generation streams. The research has also exposed some of the most common challenges and shortcomings of museum personalization: over-complicated and unintuitive personalization systems; unwanted distractions from other museum experiences; unsustainable technical infrastructure; and privacy and data protection concerns.
Qatar's Technology Landscape Summary – Technology is a key part of Qatar's 2030 National Vision and its transformation into a knowledge-driven economy. Qatar's comparatively small population and geographic size have enabled the rapid development of cutting-edge technology infrastructure such as fiber-optic internet, 4G mobile phone networks and WiFi-enabled public parks. There has also been considerable effort to develop e-government, with 547 new digital services introduced since 2013 in areas such as healthcare, utilities, education and Islamic affairs[7]. Qatar's advanced technology infrastructure is coupled with a healthy appetite for the consumption of technology goods and services. A 2013 survey revealed that the average household in Qatar owned three mobile phones, two computers and one smartphone[8]; today's figures will undoubtedly be higher. Broadband internet penetration stands at 85%, placing Qatar alongside highly advanced economies such as South Korea and the United Kingdom[9], and Qatar's youth are leading the way, with 98% of citizens aged between 15 and 24 using smartphones[10]. Together, the advanced technology infrastructure and the healthy appetite for technology goods and services provide fertile conditions for the application of personalization approaches in Qatar's museums.
Museum Audience Research Analysis Summary – Qatar Museums has an ongoing program of audience evaluation research that looks into a wide range of issues, including general perceptions of the use of technology. The research has indicated that there are positive links made between technology and creativity.
‘Everything creative now has something to do with technology’ – Qatari Mother[11]. However, the research has also revealed some important concerns, particularly from parents regarding their children's perceived over-exposure to technology. Some also link this over-exposure to a general lack of physical exercise. ‘Our children do not have this energy and vitality, they are sitting at the computer all the time, they sit 8 hours, 5 hours, it is too much. When we were their age we would jump and hop and play, they do not do anything.’ – Qatari Mother[12] ‘I have to provide them with an education that benefits them. It cannot be play all the time; be it in malls, or using a PlayStation or a computer. Computers can be a means of distraction, of learning and of leisure’ – Arab Non-Qatari Mother[13] ‘I have a technology problem at home. All my children have laptops; even the 4-year-old boy. My children spend all their time using this equipment, to the extent that my son of 4 refuses to go out because he prefers to complete his laptop session.’ – Jordanian Father[14] Although museum personalization is often presented as a way of counteracting the alienation created by technology, these genuine concerns about over-exposure will have to be carefully considered in any future development in Qatar.
Conclusions and Significance of the Research
This research provides strong evidence that museum personalization has the potential to have a significant and positive impact on the development of regular museum visiting habits in Qatar. It also exposes some important challenges that will have to be overcome through careful planning and extensive audience testing. Findings from this research will provide ideas and inspiration for Qatar's operational museums as well as opportunities to embed museum personalization into the development of new museum projects and audience engagement programs.
References
[1] Elizabeth Merritt, Trends Watch 2015, 2015, American Alliance of Museums.
[2] http://labs.cooperhewitt.org/2015/5-months-with-the-pen/
[3] https://www.dma.org/press-release/dallas-museum-art-s-dma-friends-program-home-100000-members.
[4] Museum of Islamic Art and Mathaf: Arab Museum of Modern Art.
[5] Al Riwaq Gallery, QM Gallery 10 (Katara Cultural Village) and the QM Fire Station Artist in Residence Centre.
[6] QM Website – http://www.qm.org.qa/en/our-purpose.
[7] Qatar Digital Government, Executive Highlights, p. 1, 2015, ICT Qatar.
[8] Qatar's ICT Landscape, p.5, 2013, ICT Qatar.
[9] Ibid (p.10).
[10] Ibid (p.22).
[11] QM Future Audience Evaluation: Phase 2, Part 2, p.74, 2012, Qatar Museums.
[12] QM Future Audience Evaluation: Phase 2, Part 3, p.48, 2012, Qatar Museums.
[13] QM Future Audience Evaluation: Phase 2, Part 3, p.60, 2011, Qatar Museums.
[14] QM Future Audience Evaluation: Phase 1, p.25, 2011, Qatar Museums.
-
-
-
Health Self Determination Index
The Health Self Determination Index (HSDI) is a tool to measure the degree to which legal systems allow a person to make choices and decisions based on their own preferences and interests with regard to their health. The HSDI measures the degree to which a given regulatory environment recognizes, guarantees and protects self-determination in decisions affecting a person's health. The HSDI is intended to be global in scope, that is, to cover all countries around the world, and it is empirically based, that is, built on data representing the state of affairs of health laws and regulations. Data come from four strategic areas: abortion and contraception; end-of-life; reproductive choices; and access to regenerative medicine.
The goals of the Index are to:
– Provide an up-to-date tool to assess the degree of health self-determination;
– Raise awareness of how self-determination is treated differently around the world;
– Foster an evidence-based dialogue between policymakers, patients, the medical community and society on key issues affecting a person's life.
Current state of implementation:
– Complete dataset for 43 countries;
– Partial dataset for over 100 countries;
– Available to the public at www.freedomofresearch.org;
– Preliminary data published.
Management:
– The project is currently directed by Andrea Boggio, Associate Professor of Legal Studies at Bryant University (USA);
– Structural support is currently provided by the Associazione Luca Coscioni, a not-for-profit organization based in Rome (Italy);
– Various activists, policymakers and academics are assisting the project as advisors;
– No person is currently remunerated for contributing to the project.
Future directions:
– Expand the dataset to all countries around the world;
– Expand the Index to new areas of policy; areas identified as priorities are (1) the medical use of marijuana and other illicit drugs and (2) access to palliative care and pain management drugs;
– Keep the current dataset up to date;
– Create a network of collaborators and advisors who guarantee the quality and growth of the project;
– Publish annual reports on the state of affairs of health self-determination.
Need for funding:
– Hire personnel to assist project development;
– Publish the annual report;
– Collect data;
– Website and other structural costs.
Approach: To truly advance freedom of research and treatment, we must learn how to measure it comprehensively and rigorously. In this spirit we built a multi-dimensional grid of issues and sub-issues that enabled us to clarify and operationalize freedom in particularly contested areas of policy.
Measurement questions: Monitoring and measuring freedom. Thinking about freedom as a matter of degree facilitates measurement: actions are “free” on a scale from absolute prohibition to complete absence of constraints. Since law and other regulatory instruments constitute key sources of constraints in modern societies, we can monitor and measure freedom by reviewing the regulatory environment in which researchers, health care professionals and patients do research, provide health care and seek treatment. This review allows us to determine the degree to which these actors are “free” to pursue the aspirations of expanding medical knowledge, fostering patients' well-being and choosing the best treatment.
Methodology
We adopted a multi-step methodology inspired by other efforts to build indexes and rankings in other domains of social life (human development, freedom of the press, social progress, happiness, corruption and economic freedom).
Identification of key areas of medical research and treatment that raise important questions of freedom. Four areas that raise important questions of freedom, and thus can yield key insights into the degree of freedom that researchers, health care professionals and patients enjoy, were selected: assisted reproduction technologies (ART); research with human embryonic stem cells (hESC); end-of-life decisions; and abortion and contraception.
Operationalization of the meaning of “freedom” in each of these key areas. To operationalize ‘freedom’ in each of the four areas, key regulatory conditions that constrain actors to some degree were identified, and a list of questions was prepared for each area. These questions capture the nature of these conditions and the degree to which the regulatory framework limits actors' freedom to pursue the proper goal of each area of inquiry.
Measurement. Points (from 0 to 12) are assigned for each answer to each question. The highest score is allotted to regulation that recognizes the highest degree of freedom; progressively lower scores are assigned to less free environments, that is, regulatory environments that limit freedom moderately, severely or entirely. A score of 0 is assigned to blanket prohibitions. If data are not available, the answer is not included in the calculation. For each country, we report the level of completion of data collection.
Data collection
To assign the points, each question needed to be answered. To this end, data were collected from various sources, including primary sources (statutes and other regulatory documents) and secondary sources (scientific papers and policy reports). As of March 2014, we had completed data collection for 42 countries.
Ranking. Points were then added to a total that quantifies the degree of freedom key actors enjoy in each of the selected areas of medical research and treatment. The points of each area were then summed, and their total represents the score of each country. Countries were then ranked based on the overall score.
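A compact way to see the scoring-and-ranking procedure described above is as a nested sum with missing answers excluded. The Python sketch below implements that procedure under assumed data; the countries, areas, questions and point values are invented placeholders, not the HSDI's actual instrument or dataset.

```python
# Illustrative HSDI-style scoring: each answered question earns 0-12
# points, unanswered questions are excluded, per-area scores are summed
# into a country total, and countries are ranked by that total.
# All data below are invented placeholders.
countries = {
    "CountryA": {
        "ART": [12, 8, None],          # None = data not available
        "End-of-life": [4, 0],
        "Abortion and contraception": [10, 6],
    },
    "CountryB": {
        "ART": [6, 6, 2],
        "End-of-life": [12, 8],
        "Abortion and contraception": [2, 0],
    },
}

def area_score(points: list) -> int:
    """Sum the points of answered questions; skip missing answers."""
    return sum(p for p in points if p is not None)

def country_score(areas: dict) -> int:
    """Total score: the sum of all area scores."""
    return sum(area_score(points) for points in areas.values())

ranking = sorted(countries, key=lambda c: country_score(countries[c]),
                 reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(rank, name, country_score(countries[name]))
```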
-
-
-
Qatari Media Roles in Maximizing the Qatari Soft Power and Overcoming the Glocal Social Challenges of Hosting World Cup 2022
From the moment in 2010 when the World Cup was awarded to the tiny oil-rich nation with little soccer history, the event has been shrouded in controversy. Allegations of corruption have abounded and investigations have been commissioned, but FIFA have repeatedly insisted that the competition would not be taken away from Qatar, even if that meant moving it from its traditional summer spot in the calendar. As things stand, the first ever World Cup in the northern hemisphere winter is a go, but that decision is only the start of a complex few years ahead for everyone involved. Since the controversial decision to award Qatar the tournament, the opinions of both fans and the footballing community at large have mostly been against it. The president of the German football association said that the decision is “a burden for all of football.” Various writers have called the decision “FIFA's folly,” a “farce,” and a “disaster.” Others have said the award “make[s] no sense” and that “you might as well hold the World Cup on Mars.” The plan is troubled by more than mere unpopularity: allegations of bribery and corruption have surfaced repeatedly over the five years since the vote was taken. The situation worsened when FIFA admitted it was likely that the tournament would have to be moved from its traditional June–July timeslot to the winter months because of the extreme summer heat in the region. This proposed move would have massive logistical and financial effects on professional football leagues throughout the world; shifting the 2022 Qatar World Cup to winter would be financially detrimental to professional football leagues and the many businesses that depend on them. The study employed a mixed descriptive-method design combining content analysis of 1246 Qatari media items (including Qatari newspapers, television news and editorial material from 11 December 2010 to 30 September 2015) with a field study comprising semi-structured interviews with 100 senior-level experts and a survey of a stratified random sample of 500 Qatari respondents from diverse backgrounds. The study relied on well-structured scales to measure media interest capacity and respondents' exposure and interaction. Qatar has struggled with hosting this event both in the preparation stage and after winning the bid, and the Qatari media have concentrated only on the external aspects of hosting, reacting rather than being proactive. The study adopted the issue-attention cycle model as its theoretical framework. The cycle has five stages: the pre-crisis stage; alarmed discovery and euphoric enthusiasm; realizing the true costs; declining interest; and the post-crisis stage. The study observed that the Qatari media did not cover the internal social challenges of hosting the World Cup, such as the cultural and social issues that conflict with Qataris' religion, values and traditions. As Qatar does not accept homosexuality (the LGBT, Lesbian, Gay, Bisexual, Transgender/Transsexual community) or the sale of beer in stadiums, cultural conflict is to be expected. In a very small country like Qatar, where there are no World Cup-ready stadiums and the entire cities necessary to host the event do not exist yet, all of the venues and stadiums need to be built from scratch. As the record $50-billion Sochi Olympics showed, building these things from scratch is an incredibly expensive and unpredictable enterprise.
This will require massive infrastructure, up to $200 billion, four times the amount Russia spent on the historically expensive Sochi Olympics. Costs are already getting so far out of control that Qatar will build only eight stadiums, as opposed to the twelve originally planned. This will impose challenges, whether in diverting funds from social, educational and infrastructure services or in creating terrible traffic jams in the preparation and hosting stages. Such congestion could prevent or limit fan turnout, which would of course weaken the championship. The study hypothesized that there is a significant relationship between Qataris' exposure to negative news about the World Cup and a reluctant or resistant attitude toward hosting the event. The study proposes a comprehensive, integrated communication strategy that the Qatari media should adopt, guided by the issue-attention cycle model, to prepare and orient Qatari society to deal with the ethical, social and cultural challenges of hosting the event. Internal public opinion is vital not only in supporting Qatari institutional efforts but also in guaranteeing the event's success. Overcoming the internal challenges will enable Qatar to maximize the social impact of a global event to the same degree as its economic benefits. This would reflect positively on Qatar's nation-branding strategies, as an extension of Qatar's extensive mediating, peacemaking and humanitarian efforts around the world.
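The exposure-attitude hypothesis above is the sort of association typically checked with a chi-square test of independence on the survey's cross-tabulated responses. The sketch below shows that test in SciPy on an invented contingency table; the counts are placeholders, not the study's survey data.

```python
# Illustrative chi-square test of independence: is exposure to negative
# World Cup coverage associated with attitude toward hosting?
# The contingency-table counts are invented placeholders.
import numpy as np
from scipy.stats import chi2_contingency

#                 supportive  reluctant/resistant
table = np.array([[180,        70],    # low exposure to negative news
                  [90,        160]])   # high exposure to negative news

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Exposure and attitude are not independent at alpha = .05.")
```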
-