Qatar Foundation Annual Research Forum Volume 2010 Issue 1
- Conference date: 12-13 Dec 2010
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume: 2010
- Published: 13 December 2010
-
Signature changes in human brain wave activity associated with olfactory learning
Authors: Abeer Raji M. Al Shammari
Abstract: Previous animal studies have shown that olfactory learning modulates oscillatory activity in the mammalian olfactory system. In trained rodents, odour-induced oscillations in the gamma frequency band (30-80 Hz) were specifically amplified in the olfactory bulb (OB), accompanied by power increases in beta oscillations (15-30 Hz) in both the OB and pyriform cortex (PC). However, there is still no evidence that these learning-induced oscillations also occur in humans; establishing this is one aim of this study. Additionally, we sought to determine whether the drop in detection threshold for androstadienone due to increased sensitization also generalizes to the structurally similar androstenol. We also intended to find out whether the induced sensitization to androstadienone results in changes in the perceived odour quality. Fourteen normal human subjects with low to intermediate sensitivity to androstadienone were selected for ten-day scent trials. Using electroencephalography (EEG), the oscillatory response to androstadienone was recorded on days 0, 3, 7 and 10, predominantly in four brain regions: the OB, PC, and the right and left frontal hemispheres. A power spectrum technique was used to analyze EEG responses in the gamma and beta frequency bands. Our results showed that learning-induced sensitization to androstadienone amplified gamma power in the OB, whereas beta oscillations were only enhanced in the PC. Exposure-induced sensitization to androstadienone also generalized to androstenol, demonstrating plasticity in the human olfactory system. The induced learning was accompanied by significant changes in the perceived familiarity and intensity of androstadienone. As a whole, this is the first study to demonstrate that olfactory learning in humans is associated with an increase in gamma oscillatory power in the OB and in beta oscillations in the PC. This may indicate that gamma oscillations are converted to beta waves as they travel the considerable distance to the PC.
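The band-power analysis described above can be sketched as follows. This is an illustrative stand-in, not the authors' pipeline: a synthetic one-channel trace and a naive DFT periodogram take the place of real EEG data, and the sampling rate is an assumption.

```python
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Total power in the [f_lo, f_hi] Hz band via a naive DFT periodogram."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n                      # frequency of bin k
        if f_lo <= f <= f_hi:
            x = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            power += abs(x) ** 2 / n
    return power

# Synthetic 1 s trace at 256 Hz: a 40 Hz (gamma) tone plus a weaker 20 Hz (beta) tone
fs = 256
sig = [math.sin(2 * math.pi * 40 * t / fs) + 0.3 * math.sin(2 * math.pi * 20 * t / fs)
       for t in range(fs)]
gamma = band_power(sig, fs, 30, 80)
beta = band_power(sig, fs, 15, 30)
```

In a real analysis one would use a windowed FFT (e.g. Welch's method) per electrode and epoch, but the band-summation step is the same.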
-
A mouse model analyzing the influence of dietary fat intake on liver apoptosis
Authors: Ahmed Hamad Al Saie, Robert Weiss, Erin Daugherity and Kirk Maurer
Abstract: Dietary fat intake is associated with hepatobiliary cancers, which carry a poor prognosis and cause over 20,000 deaths per year in the US alone. We hypothesized that excess lipid accumulation in the liver promotes hepatic cancer through inflammation and oxidative DNA damage. To protect genomic integrity in eukaryotic cells, the protein kinase ataxia telangiectasia mutated (ATM) responds to oxidative DNA damage and DNA strand breaks. Failure to activate ATM following damage leads to defective cell cycle control and impaired DNA repair. In this study, ATM-knockout mice were used as a sensitized background to assess the effects of oxidative stress and DNA damage associated with hepatosteatosis. ATM-deficient and control mice were fed a high-fat diet for eight weeks, and then liver tissue was analyzed for apoptosis. We expected more apoptotic cells in ATM-deficient mice fed the high-fat diet than in control mice, due to their DNA break repair deficiencies. Liver tissues were sectioned and stained by TUNEL assay, a method for detecting the DNA fragmentation that occurs during apoptosis. The TUNEL immunohistochemistry protocol was first optimized for hepatocytes. Positive cells were counted in multiple 40X fields from each tissue section. Statistical differences between groups were determined by comparing the fractions of positive cells. There were more apoptotic cells in livers from mice fed the high-fat diet than in those fed the normal diet. Interestingly, among mice fed the high-fat diet there was a slight decrease in apoptosis in ATM-deficient mice relative to controls. Although this difference did not reach statistical significance, it may indicate that ATM is required for inducing hepatic apoptosis in response to stresses associated with increased dietary fat consumption.
Additional staining for cell proliferation as well as DNA damage response activation will be performed in the future. This study will begin to elucidate the interactions between lipid metabolism, oxidative stress, DNA damage and hepatobiliary oncogenesis, and will establish a new mouse model that will provide a powerful tool for future mechanistic and translational studies of hepatobiliary disease.
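The comparison of positive-cell fractions could be carried out with a standard two-proportion z-test, sketched below. The cell counts are hypothetical, not the study's data, and the abstract does not specify which statistical test the authors used.

```python
import math

def two_proportion_z(pos_a, n_a, pos_b, n_b):
    """Two-sided z-test comparing two positive-cell fractions."""
    p1, p2 = pos_a / n_a, pos_b / n_b
    p = (pos_a + pos_b) / (n_a + n_b)                 # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # pooled standard error
    z = (p1 - p2) / se
    # two-sided p-value from the normal CDF, via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: TUNEL-positive hepatocytes out of cells counted per group
z, p = two_proportion_z(42, 500, 18, 500)   # high-fat diet vs. normal diet
```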
-
CameraNets: coverage and data management problems in distributed smart camera networks
Authors: Nael Abu-Ghazaleh
Abstract: Distributed Smart Camera systems (DSCs) consist of a (possibly large) number of cameras that collaborate on a monitoring task. DSCs have a wide range of applications, such as surveillance, intelligent traffic systems, environmental monitoring, industrial safety and law enforcement. DSCs automatically control what to monitor and how to act on the collected video. For example, cameras monitoring traffic may change their orientation to track moving traffic and alert responders if an accident occurs.
DSCs differ from conventional multi-camera surveillance systems in that they eliminate the need for a human to control them and to interpret the video. Free of this limitation, DSCs can scale to much larger sizes while improving monitoring effectiveness. However, a number of difficult challenges must be solved before DSCs can realize this potential. In this paper, we discuss our results and activities within a QNRF-funded project looking at two general challenges facing DSCs:
1 - Coverage control: how to control cameras with Pan-Tilt-Zoom capability to track a group of targets and/or areas of interest. We frame the problem as an optimization problem that maximizes the value of the covered targets. We show that the problem is NP-hard and develop a family of heuristic approaches with near-optimal behavior that do not require central coordination.
2 - Data management: as the cameras collect video, they need to relay it for real-time monitoring over a bandwidth-constrained network, or store it for later analysis. Cameras can coordinate to eliminate redundancies and to infer the importance of the observed video. Moreover, storage architectures are needed to store the video data effectively and to allow efficient indexing and retrieval.
We report our initial solutions and performance evaluation studies in this area, obtained through a mixture of simulation and experiments on a multi-camera testbed that we have started to deploy.
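A minimal sketch of the greedy flavor of heuristic hinted at in item 1 above: each camera independently picks the orientation with the largest marginal covered-target value. The camera names, orientations and target values are invented for illustration; the project's actual heuristics are not specified here.

```python
def greedy_coverage(cameras, values):
    """Greedy orientation selection without central coordination.
    cameras: {cam: {orientation: set_of_targets}}; values: {target: value}."""
    covered, plan = set(), {}
    for cam, orients in cameras.items():
        # pick the orientation adding the most not-yet-covered target value
        best = max(orients, key=lambda o: sum(values[t] for t in orients[o] - covered))
        plan[cam] = best
        covered |= orients[best]
    return plan, sum(values[t] for t in covered)

cams = {
    "c1": {"pan_left": {"t1", "t2"}, "pan_right": {"t3"}},
    "c2": {"pan_left": {"t2"}, "pan_right": {"t3", "t4"}},
}
vals = {"t1": 2, "t2": 1, "t3": 1, "t4": 3}
plan, total = greedy_coverage(cams, vals)
```

Greedy set-cover-style heuristics like this give a constant-factor approximation for such NP-hard coverage objectives, which is why they are a natural decentralized baseline.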
-
Exploiting social interactions using opportunistic networks
Authors: Mtibaa Abderrahmen and Khaled Harras
Abstract: Social interaction has drastically evolved over time. Moving away from face-to-face interaction, telephone networks made the first step towards remote social interaction. The internet, further boosted by the tremendous increase in lightweight mobile devices, has taken social interaction to new frontiers. Users can already email, chat, call, and video conference with others around the world without being attached to any fixed location. The final frontier has been to exploit this technology to completely virtualize social interaction via online social networking services such as Facebook, Orkut, MySpace, and LinkedIn. These applications create a virtual world where users build social networks of their acquaintances, allowing people belonging to these social networks or communities to interact freely regardless of the boundaries of time and location. At this point, we pose a simple question: is this truly the final frontier with respect to social interaction?
When people with similar interests or common acquaintances are a short distance from one another, for example on the same street, train, or mall, they have no mechanism to identify this potential social interaction. Current research on geolocalization applications running on mobile devices provides some solutions to such problems; however, these applications face numerous challenges, including network coverage, cost, and energy consumption. We ultimately need context-aware, adaptive, and agile solutions that can seamlessly extend people's senses beyond their physical boundaries in order to exploit potentially rewarding social interactions.
Our work, targeted towards fulfilling this need, takes advantage of physical context merged with online social relationships to improve the physical social interaction experience of people. We will discuss how current research thrusts, such as delay- and disruption-tolerant networks (DTNs) and mobile opportunistic networking, can exploit social relationships between people to disseminate messages efficiently through such challenged networks. We believe that these types of networks better model and reflect human mobility patterns and limitations, and so are more naturally suited as a platform for tackling the problems mentioned above.
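A toy sketch of socially aware opportunistic forwarding: a carrier hands a message to an encountered peer only if the peer looks socially closer to the destination. The Jaccard similarity over contact sets and the example sets below are illustrative assumptions, not the authors' algorithm.

```python
def social_similarity(a_friends, b_friends):
    """Jaccard overlap of two contact sets, a crude social-tie strength."""
    union = a_friends | b_friends
    return len(a_friends & b_friends) / len(union) if union else 0.0

def should_forward(carrier_friends, peer_friends, dest, dest_friends):
    """Hand the message over only if the encountered peer knows the
    destination or is socially closer to it than the current carrier."""
    if dest in peer_friends:
        return True
    return (social_similarity(peer_friends, dest_friends) >
            social_similarity(carrier_friends, dest_friends))

carrier = {"a", "b"}              # current carrier's contacts
peer = {"b", "c", "d"}            # encountered peer's contacts
dest_friends = {"c", "d", "e"}    # destination's contacts
decision = should_forward(carrier, peer, "dest", dest_friends)
```

Published social forwarding schemes (e.g. based on centrality or community membership) refine this same forward-if-closer idea.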
-
Effective programming for large distributed ensembles
Authors: Iliano Cervesato
Abstract: Claytronics is a project at Carnegie Mellon University to develop programmable matter, bringing the power of programming to physical matter. A Claytronics system consists of millions of tiny computing units called catoms. Each catom is capable of executing code, sensing and communicating with nearby catoms, and moving around its neighbors subject to the laws of physics. The result is an ensemble of particles that can change its physical properties under program control.
Surprisingly, the main challenge to realizing Claytronics is not the underlying hardware, but the programming methodology. Providing effective methods for programming an ensemble of millions of units so that they can reliably, even provably, work together on a common task is a significant challenge. We have developed two programming languages, LDP and Meld, with which we could implement some simple yet useful behaviors. Each language excelled at some tasks but not at others. Furthermore, it is unknown whether either language, or even their underlying programming styles, will scale to large programs.
In this work, we investigate the potential of MSR 3, a programming paradigm combining multiset rewriting, logic and process algebra, as an effective basis for programming Claytronics. MSR 3 natively provides support for concurrency, synchronization, non-determinism, non-monotonicity, and atomicity. It has been used with great success in areas such as security protocols and biomolecular systems. MSR 3 appears to extend the computing paradigms of both LDP and Meld. Our main objective is to customize MSR 3 for Claytronics, supporting a variety of abstraction levels (from modeling the physical environment to programming meta-modules).
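A minimal multiset rewriting step in the general spirit of MSR: the state is a multiset of facts, and a rule consumes its left-hand side and produces its right-hand side. The "candidate"/"leader" facts are invented for illustration and do not come from the MSR 3 design.

```python
from collections import Counter

def apply_rule(state, lhs, rhs):
    """Fire a multiset rewriting rule lhs -> rhs once, if lhs is present."""
    need = Counter(lhs)
    if any(state[f] < n for f, n in need.items()):
        return None                         # rule not applicable in this state
    # consume the left-hand side, produce the right-hand side
    return state - need + Counter(rhs)

# Toy ensemble: two catoms negotiate a leader (hypothetical facts)
state = Counter({"candidate": 2})
# rule: candidate, candidate -> leader, follower
state = apply_rule(state, ["candidate", "candidate"], ["leader", "follower"])
```

Non-determinism arises when several rules (or several instantiations of one rule) are applicable to the same state; an MSR runtime picks one atomically, which is what makes the paradigm a fit for concurrent ensembles.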
We believe that this will reap rewards not just for programming Claytronics, but will also have a direct impact on understanding how best to program all large ensembles: sensor networks, internet protocol routers, autonomous vehicles, power system management, and more.
-
An integrated platform for intelligent road traffic monitoring and travel information delivery
Authors: Fethi Filali
Abstract: Currently in Qatar, and to the best of our knowledge in most Gulf states, there is a lack of reliable information about traffic conditions and congestion. This information, especially if ubiquitous and near real time, is highly desirable to support consumer, enterprise, and government centric applications. Since no universal solution exists, a great deal of innovative research is needed. This research work aims at designing and developing an integrated intelligent platform for real-time monitoring of road traffic, based on advanced data processing and filtering algorithms. The platform is composed of four blocks. The Data Sources block is responsible for generating raw traffic data, for example from road sensors. The Platform Core block is where the Platform Engine and Platform Services are implemented. It processes raw traffic data and translates it into meaningful real-time information, which is then delivered to user applications in different formats and via various fixed and mobile end-user devices, in either real-time or playback mode. The User Applications block contains the set of applications interacting with the platform. Finally, the whole platform is configured and controlled via the fourth block: the Platform Administration and Management block. Given the importance and complexity of the problem, a great deal of research and development effort has been conducted to create a robust, efficient, and rich intelligent platform supporting a large number of services and applications. The research efforts focused on geographical data preparation, communication protocols with remote data sources, speed and travel-time estimation and prediction, raw data filtering and fusion, map matching, and shortest- and fastest-route computation. It is well accepted that reducing travel time reduces people's stress and improves the quality and quantity of what they produce.
We believe that the platform's services will contribute to reaching this objective in Qatar and in the region.
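Fastest-route computation over a road graph, one of the services listed above, can be sketched with Dijkstra's algorithm on travel times. The toy road network below is hypothetical, and real deployments would use predicted travel times from the platform's estimation block.

```python
import heapq

def fastest_route(graph, src, dst):
    """Dijkstra over per-segment travel times (minutes).
    graph: {node: [(neighbor, travel_time), ...]}"""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break                              # destination settled
        if d > dist.get(u, float("inf")):
            continue                           # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:                          # walk back along predecessors
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path)), dist[dst]

roads = {"A": [("B", 4), ("C", 2)], "C": [("B", 1), ("D", 7)], "B": [("D", 3)]}
route, minutes = fastest_route(roads, "A", "D")
```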
-
Interference-aware protocol design in wireless networks
Authors: Saquib Razak
Abstract: Wireless networking is enabling a new class of applications that provide users with access to information and communication anytime and anywhere. The success of these applications and services, accessible through smart phones and other wireless devices, is placing tremendous pressure on the limited wireless bandwidth. To sustain this growth, it is critical to develop protocols that can efficiently manage the available bandwidth.
One of the major complications in developing these wireless protocols is the complex effect of interference between different users, which often plays a defining role in the overall performance of wireless networks. Thus, the goal of this project is to characterize interference behavior and use it to develop a new generation of protocols that focus on minimizing destructive interference.
We focus on Carrier Sense Multiple Access (CSMA), the most commonly used algorithm in wireless networks at the core of widespread standards such as IEEE 802.11 (WiFi). These protocols are unable to effectively arbitrate the medium in multi-hop wireless networks, causing destructive interactions such as hidden and exposed terminals, leading to collisions, poor performance and unfairness. This project first characterizes the impact of interference in detail, showing that there are only a few modes of interference that account for the different interactions that occur when multiple users compete for use of the medium.
We then use this insight to develop novel protocols using two main strategies: (1) remove destructive interference whenever possible; and (2) find alternative routes around areas of destructive interference when removing it is not an option. For the first part, our methodology controls transceiver parameters such as transmit power, receiver threshold and receiver sensitivity to convert destructive interactions into constructive ones. Our results show that this technique reduces the overall power consumed in a network, allowing better channel reuse and hence more efficient capacity usage. For the latter part, we are designing a routing protocol that routes traffic around areas of potentially high interference. A comparison of our protocol with an existing shortest-path routing protocol shows that our metric substantially improves the performance and efficiency of the network.
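The route-around-interference idea can be illustrated with a simple additive metric that penalizes links crossing high-interference areas. The penalty values and the topology below are assumptions for illustration, not the project's actual metric.

```python
def route_cost(path, interference):
    """Additive metric: each hop costs 1 plus the link's interference penalty."""
    return sum(1 + interference.get((a, b), 0) for a, b in zip(path, path[1:]))

# Hypothetical penalties: link (n2, n3) crosses a high-interference area
penalty = {("n2", "n3"): 5}
shortest = ["n1", "n2", "n3"]            # fewer hops, but through interference
detour = ["n1", "n4", "n5", "n3"]        # longer, but interference-free
best = min((shortest, detour), key=lambda p: route_cost(p, penalty))
```

Under this metric the detour wins despite its extra hop, which is exactly the trade-off an interference-aware routing protocol makes against plain shortest-path routing.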
-
Design and analysis of new generation protocols for triple-play networks
Abstract: Transmission Control Protocol (TCP)'s proven stability and scalability have made it the most widely used transport-layer protocol for more than twenty years. However, as multimedia applications become ubiquitous over the internet, TCP has been found incapable of meeting their requirements, which place more emphasis on timeliness than on reliability. Because of this, many multimedia applications turn to UDP (User Datagram Protocol) as their underlying transport protocol. Nevertheless, the majority of video-on-demand and live broadcast applications still predominantly favor TCP over UDP, due to UDP's unresponsiveness to network conditions and its problems with firewalls and NATs (Network Address Translation).
TCP's poor performance in delivering real-time media is due to the following reasons: 1) TCP's emphasis on reliable in-order delivery causes frame jitter that interrupts media playout; 2) TCP's coarse-grained retransmission timeout (RTO) and its back-off mechanism are detrimental to any real-time application.
In this study, we propose a new variant of TCP with an early retransmission scheme as an enhancement to make it more suitable for streaming media. We call this new protocol TCP-ER. We performed extensive NS-2 simulations to show that: 1) the early retransmission scheme reduces the number of retransmission timeouts in a variety of network environments, resulting in considerably lower packet delay jitter; 2) under the same network conditions, constrained streaming over TCP-ER produces considerably fewer late packets than its standard TCP counterpart; 3) TCP-ER achieves higher throughput in severely congested network conditions, while remaining relatively fair to typical TCP implementations (specifically TCP-SACK) as congestion is alleviated.
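A sketch of what an early-retransmission trigger might look like. The duplicate-ACK threshold and the 1.5×SRTT elapsed-time rule are hypothetical stand-ins, since the abstract does not spell out TCP-ER's exact conditions.

```python
def should_retransmit_early(dup_acks, elapsed, srtt, dup_thresh=2):
    """Hypothetical early-retransmission trigger: resend a packet before the
    coarse-grained RTO fires, once duplicate ACKs or elapsed time hint at loss.
    dup_acks: duplicate ACKs seen; elapsed: seconds since the packet was sent;
    srtt: smoothed round-trip time in seconds."""
    if dup_acks >= dup_thresh:
        return True                      # loss inferred from duplicate ACKs
    return elapsed > 1.5 * srtt          # resend well before the coarse RTO

# A packet outstanding for 300 ms with a smoothed RTT of 100 ms is resent early
early = should_retransmit_early(dup_acks=0, elapsed=0.300, srtt=0.100)
```

The point of any such rule is to trade a few spurious retransmissions for much lower worst-case delay, which matters for playout deadlines but not for bulk transfer.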
-
Qloud: a cloud computing infrastructure for scientific applications
Authors: Sakr Majd, Suhail Rehman, Qutaibah Malluhi, Hussein Alnuweiri and Mazen Zaghir
Abstract: Cloud computing is a disruptive technology that is rapidly changing how organizations use and interact with information technology. By transforming computing infrastructure from a product into a service, it offers many benefits, including scalability of resources, flexibility for users in terms of software and hardware needs, increased reliability, decreased downtime, increased hardware utilization, and reduced upfront costs and carbon footprint. Academia and research organizations are now actively involved in bringing some of those benefits to high-performance and scientific computing. The Qatar Cloud Computing Center (Qloud) research initiative brings Carnegie Mellon, Texas A&M, and Qatar University together to further the research and development of cloud computing in Qatar and to exploit it for regionally relevant scientific applications.
In partnership with IBM, two pilot cloud systems have been put in place: one on the CMUQ campus in early 2009, and another on the QU campus in 2010. These systems are available for educational and experimental use by researchers, students and faculty in Qatar. Furthermore, an introductory course in cloud computing was held in the spring 2010 semester to equip computer science students with the skills needed to work with this new computing paradigm.
The Qloud research focuses on porting scientific applications to the cloud. Large-scale data-intensive applications can reap the benefits of cloud computing and programming models such as MapReduce. However, there is a lack of understanding of the performance implications of executing scientific applications in cloud environments, which is an impediment to increased adoption of cloud computing for these purposes. In our research, we explore the performance and behavior of various classes of scientific applications in a cloud computing environment. Specifically, we are studying the effect of provisioning variation, a variation in the performance of an application caused by the variation of resource allocation in a cloud computing environment. Our initial findings indicate that for certain application types, we observe a five-fold variation in performance between a best-case and worst-case resource mapping in our private cloud environment. This research can help in building new frameworks to support scientific computation on the cloud.
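The MapReduce programming model mentioned above can be sketched in a few lines; this is a single-process skeleton showing the map, shuffle and reduce phases, not a distributed implementation.

```python
from collections import defaultdict
from itertools import chain

def map_reduce(inputs, mapper, reducer):
    """Minimal MapReduce skeleton: map each input to (key, value) pairs,
    shuffle values by key, then reduce each key's values."""
    groups = defaultdict(list)
    for key, value in chain.from_iterable(mapper(x) for x in inputs):
        groups[key].append(value)                 # shuffle phase
    return {k: reducer(k, vs) for k, vs in groups.items()}

# Classic word count over two toy documents
docs = ["cloud computing scales", "scientific computing on the cloud"]
counts = map_reduce(
    docs,
    mapper=lambda doc: [(w, 1) for w in doc.split()],
    reducer=lambda w, ones: sum(ones),
)
```

In a real cluster, the map and reduce calls run on different machines, and how those tasks are placed on heterogeneous virtual machines is precisely the provisioning variation studied here.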
-
Designing a new programming language for building secure cloud computing-based applications
Authors: Thierry Sans and Iliano Cervesato
Abstract: In 2009, Carnegie Mellon Qatar, Qatar University, Texas A&M Qatar and IBM launched a joint research project on cloud computing. Cloud computing is a computing paradigm in which computing resources, software and data are made available to users as a service through the internet. In this paradigm, the software is no longer a standalone application installed on the user's platform, but resides on one or several servers. For instance, Google Docs is an office suite (word processor, spreadsheet and presentation) that can be used through a web browser. This new kind of application is a radical shift in the way we design, implement and deploy software. In this context, ensuring security becomes critical, since a vulnerability in a cloud-based application may expose the data of all users of the service. Yet developing secure cloud applications is complex, because programmers are required to reason about distributed computation and to write code using heterogeneous languages that often were not originally designed with distributed computing in mind. Since current technologies provide limited support, testing is the common way to catch bugs and vulnerabilities, and there are doubts that this can scale up to meet the expectations of more sophisticated cloud-based applications. In this project, we have designed a type-safe programming language called "Qwesst". We used it to express interaction patterns commonly found in distributed applications that go beyond current technologies. This language prevents the programmer from writing unsafe code that can lead to cross-site scripting (XSS) attacks. An XSS attack enables an attacker to inject JavaScript code into a webpage; this severe vulnerability has become the most widespread security breach in web-based applications.
In the future, we plan to extend the language with new security features that will allow the programmer to control data dissemination and information flow.
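The unsafe code that leads to XSS is, at its core, unescaped user input reaching a page. A minimal illustration of the defence in ordinary Python follows; a type-safe language like Qwesst aims to enforce this kind of discipline statically, whereas here it is done by hand.

```python
import html

def render_comment(user_input):
    """Escape user-controlled text before embedding it in HTML, so injected
    markup is displayed as text rather than executed by the browser."""
    return "<p>" + html.escape(user_input) + "</p>"

# An attempted script injection is neutralized into inert text
safe = render_comment('<script>alert("xss")</script>')
```

Forgetting a single call like this is exactly the kind of bug that testing struggles to catch and that a type system can rule out by construction.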
-
What do drill strings and surgical threads have in common?
Authors: Annie Ruimi
Abstract: Drill strings used in oil and gas operations are slender circular columns approximately 3 to 5 km long and 30 to 50 cm in diameter, while surgical threads are typically 75 cm to 1 m long and 0.5 to 1 mm thick, depending on the type of surgery; both therefore share the characteristic of having a diameter-to-length ratio on the order of 10^-3. Drill string operators need to constantly monitor the position of the drilling apparatus, as excessive vibrations can lead to sudden equipment failure. Likewise, a surgeon wants to avoid thread tangling, a non-linear dynamical process particularly detrimental during knot formation.
The elementary Euler-Bernoulli beam theory, and even the Timoshenko beam theory, is insufficient to predict the correct configuration of structures that coil, i.e. wind around their own axis in addition to bending and twisting. Instead, we will use finite element computational tools based on the lesser-known Cosserat theory of rods.
In the case of surgical thread, the goal of our research program is to develop software that medical school students will use to practice the task of surgical suturing. The program's immediate benefits are therefore pedagogical, and in line with the Qatar Sidra project's aim of offering state-of-the-art medical training.
In the case of drill string dynamics, the objective of our program is to understand the interactions between vibration sources and drill string-BHA (bottom hole assembly) responses, and to offer "real-time" assistance to drilling rig operators by developing advanced dynamics simulation software. Given the high operational costs involved, the anticipated benefits of the program are clearly economic.
By engaging simultaneously in these two research programs, we hope to demonstrate that the Cosserat rod theory is a powerful tool that can be used to solve a wide range of applications that may otherwise appear very distant.
-
Qatar simulator development programme
Authors: Max-Antoine Jean Renault
Abstract: It is official: the automotive world is ramping up its capabilities in simulation. Applications range from motorsports (optimization of vehicle dynamics, race track familiarization, car engineering) to driver-assistance systems (development of vehicle dynamics controllers), utilizing software-in-the-loop (SIL) and hardware-in-the-loop (HIL) validation of, for example, electronic control units (ECUs). Another major emerging market is driver safety and training, e.g. for emergency services and driver training centers.
Using HIL, simulation helps develop increasingly complex embedded systems, connect them to car hardware, and test and ensure correct functionality and integration. Time-consuming manual testing has been replaced by automated simulation. When this is done in a pre-production phase, time-to-market is considerably shortened and expensive recalls are minimized.
Using driver-in-the-loop (DIL), simulation provides a consistent and safe driving environment for drivers to gain or improve skills. In motorsports this saves track-time-related costs and helps gain a competitive advantage; in the commercial world, drivers become better at dealing with hazards while interacting with in-car functions, thus minimizing the risk of accidents or fatal injuries.
The Williams Technology Centre is engaged in developing driving simulators in the three key areas of motorsports, entertainment, and road safety & training. We benefit from years of F1 simulator experience, with an excellent understanding of vehicle dynamics and driver training needs. Our capabilities in automotive SIL, HIL and DIL are extensive. By combining in-house developed software, real car hardware, and outstanding audio and visual graphics, our simulators achieve remarkably high fidelity.
Current efforts concentrate on developing a DIL motorsport simulator incorporating real electro-mechanical car parts, and running on advanced software. Following extensive research into the human sensory system, we are pioneering an innovative visual environment to enhance driver immersion.
Research and development endeavors from 2011 will focus on further advancing high-fidelity control-loading steering systems, growing our HIL capability, and sophisticated motion-cueing development. Artificial intelligence and scenario development will equally be at the heart of further expansion.
-
Named entity recognition from Arabic Wikipedia
Authors: Mohit Behrang, Kemal Oflazer and Noah Smith
Abstract: Named Entity Recognition (NER) is the problem of locating mentions of entities such as persons, locations and organizations. Named entity information is helpful for reducing the complexity of monolingual and multilingual processing tasks such as information extraction, parsing and machine translation. We investigate the Arabic NER problem on Arabic Wikipedia text. We employ statistical sequence labeling methods for solving the NER task, since previous studies suggest that sequence labeling methods, such as Conditional Random Fields, are the state-of-the-art NER frameworks.
Sequence labeling methods require human-labeled training data. Most of the Arabic human-labeled data for NER belongs to the political news domain, and the resulting trained models are biased towards the news domain. In contrast, our target test data (Arabic Wikipedia articles) covers a very diverse set of topics. The domain mismatch between the training and test data results in poor NER performance.
To reduce this coverage problem, we present three techniques: (1) we use the Wikipedia network structure to collect additional information about the text. Information such as monolingual and cross-lingual hyperlinks and text formatting leads us to use new features of the Wikipedia text in NER models. Moreover, we use cross-lingual projection to collect named entity information from English Wikipedia. (2) We use a domain adaptation technique to shift the model from the baseline political domain to domains relevant to our test data. Our model adaptation uses a small set of in-house-labeled Arabic Wikipedia articles. (3) We use self-training to move from a fully supervised to a semi-supervised learning framework: we collect a large volume of unlabeled Arabic Wikipedia articles to expand the underlying NER domain to new text domains. Our model expansion is gradual and iterative: in each iteration we add a new set of unlabeled articles to the training set, using the current model to label them and to construct a larger model.
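The iterative self-training loop in technique (3) can be sketched as follows. The toy "model" (a majority-label stand-in for a CRF) and the example articles are purely illustrative assumptions.

```python
def self_train(labeled, unlabeled, train, label_with, batch=2, rounds=3):
    """Iterative self-training: train, label a batch of unlabeled articles
    with the current model, fold them into the training set, retrain."""
    data = list(labeled)
    model = train(data)
    for _ in range(rounds):
        if not unlabeled:
            break
        batch_items, unlabeled = unlabeled[:batch], unlabeled[batch:]
        data += [(x, label_with(model, x)) for x in batch_items]
        model = train(data)                  # retrain on the expanded set
    return model, data

# Toy stand-ins: the "model" is just the majority label seen in training
train = lambda data: max(set(l for _, l in data), key=[l for _, l in data].count)
label_with = lambda model, x: model
model, data = self_train([("doha", "LOC"), ("qatar", "LOC")],
                         ["wikipedia", "article"], train, label_with)
```

In practice one would only fold in predictions above a confidence threshold, to keep early model errors from compounding across iterations.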
Our NER evaluations are based on the standard precision and recall metrics. We evaluate our proposed framework on four different text domains of Arabic Wikipedia.
-
A second-order statistical method for spectrum sensing in correlated shadowing and fading environments
Authors: Serhan Yarkan and Khaled Qaraqe
Abstract: Spectrum sensing is one of the most important tasks of cognitive radios (CRs) in future wireless systems and of user equipment (UE) in next-generation wireless networks (NGWNs). Deciding whether a specific portion of the radio frequency (RF) spectrum is occupied is therefore of paramount importance for all sorts of future wireless communications systems. In this study, a spectrum sensing method that employs a second-order statistical approach is proposed for detecting fast-fading signals in spatially correlated shadowing environments. Analysis and performance results are presented, along with a discussion comparing the proposed method's performance with that of energy detection.
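A second-order statistic such as the sample autocorrelation can separate a modulated signal from white noise, which is the general intuition behind detectors of this family. The sketch below is a generic illustration with synthetic samples and an arbitrary threshold, not the paper's actual method.

```python
import math
import random

def autocorr(samples, lag):
    """Sample autocorrelation at a given lag (a second-order statistic)."""
    n = len(samples) - lag
    return sum(samples[i] * samples[i + lag] for i in range(n)) / n

def occupied(samples, lag=1, threshold=0.1):
    """Declare the band occupied when lag-1 correlation exceeds a fraction of
    the power: modulated signals are correlated across samples, noise is not."""
    return abs(autocorr(samples, lag)) > threshold * autocorr(samples, 0)

random.seed(0)
tone = [math.cos(0.3 * n) for n in range(2000)]      # correlated signal
noise = [random.gauss(0, 1) for _ in range(2000)]    # white noise only
```

Unlike plain energy detection, a ratio of second-order statistics like this is insensitive to the absolute noise power, which is what makes such detectors attractive under shadowing.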
-
Characterization of the indoor/outdoor radio propagation channel at 2.4 GHz on Qatar University campus
Authors: Irfan Ahmed
Abstract: This technical report presents site-specific signal strength measurement results for path loss, shadowing, and fading in the 2.4 GHz band under typically harsh environmental conditions (high temperatures of 40-50°C and humidity of 80-90%). We used a Rohde & Schwarz FSH8 spectrum analyzer and InSSIDer, a free software tool for wireless local area networks (WLANs). Measurements were taken in indoor and outdoor environments at various locations and at different times of the day. An empirical channel model that characterizes the indoor-outdoor wireless channel has been derived from these measurements. This report provides information that will be useful for the design and deployment of wireless mesh networks at Qatar University.
For a radio communication system, the channel describes how electromagnetic propagation delivers a transmitted signal to the receiver. In a mobile communication system, the channel changes with the movement of the communicating entities and of other objects that affect the electromagnetic fields at the receiver.
In the last decade, most indoor wired networks have been replaced by wireless networks, which can also provide outdoor connectivity within campus areas. WLANs based on IEEE 802.11 are widely deployed to provide users with network connectivity without being tethered to a wired network. Wireless networks can provide nearly the same services and capabilities commonly expected of wired networks. IEEE 802.11 networks have been developed to provide large bandwidth to users located in indoor and outdoor campus environments, and are being studied as an alternative to the high installation and maintenance costs incurred by the traditional additions, deletions, and changes experienced in wired LAN infrastructures. Because of the availability of unlicensed spectrum, IEEE 802.11 WLAN devices operate in the ISM (Industrial, Scientific and Medical) bands at 2.4 GHz or 5 GHz. For accurate planning of indoor/outdoor radio networks, modeling of the propagation channel is required.
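Empirical channel models of the kind derived in this report are often of the log-distance form. The sketch below uses illustrative parameters (reference loss pl0 and exponent n), not the measured Qatar University values.

```python
import math

def path_loss_db(d, d0=1.0, pl0=40.0, n=3.0):
    """Log-distance path loss: PL(d) = PL(d0) + 10 n log10(d / d0).
    pl0 (dB at reference distance d0) and exponent n are illustrative only."""
    return pl0 + 10 * n * math.log10(d / d0)

def rssi_dbm(tx_power_dbm, d, **kw):
    """Predicted received power: transmit power minus path loss."""
    return tx_power_dbm - path_loss_db(d, **kw)

# Received power at 10 m and 100 m from a 20 dBm access point
near, far = rssi_dbm(20, 10), rssi_dbm(20, 100)
```

Fitting pl0 and n to the site measurements (typically by least squares on the dB values, with shadowing as the residual) yields the kind of empirical model the report describes.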
An initial study of the structural phase transition of SrTiO3
Authors: Fadwa El Mellouhi, Edward Bothers, Gustavo Scuseria and Melissa Lucero
Abstract: SrTiO3 (STO) is a complex oxide perovskite of great technological interest for its superconductivity, blue-light emission and photovoltaic effect. Under normal conditions, SrTiO3 crystallizes in the cubic perovskite structure and undergoes a second-order phase transition to a tetragonal structure, known as the antiferrodistortive (AFD) phase of STO, at the critical temperature Tc = 105 K. The AFD phase of STO can appear near interfaces at much higher temperatures if STO is used as a substrate for the growth of thin films or superlattices with other perovskites. In recent decades, both phases of STO have been extensively studied with different schemes of ab initio calculations, but none of the previously published work has been able to give, at the same time, an accurate estimate of the structural and electronic properties of both the cubic and AFD phases of STO. In this work, we use Gaussian 09 to fully explain the reason behind this failure, using a large spectrum of functionals ranging from pure DFT functionals such as LDA and GGA to more modern and complex hybrid functionals such as HISS and HSE06. We also show how the quality of the basis set competes with the functional effect in predicting the properties of STO, the strongest competition being observed for the AFD phase. In fact, basis sets of low quality tend to seriously inhibit the tetragonality of the AFD phase and sometimes even suppress it. On the other hand, pure DFT functionals tend to overestimate the tetragonality of the AFD phase, in agreement with previously reported results in the literature using basis sets of comparable quality. Hybrid functionals predict the structural properties of the cubic and AFD phases in very good agreement with experimental results, especially when used with high-quality basis sets.
Thus, we present the most reliable combination of functional and Gaussian basis set for STO currently computationally tractable. This combination gave the best agreement with the experimental structural and electronic properties for the cubic and the AFD phases of STO. It is accurate enough to enable us to understand the changes in the band structure during the cubic to AFD phase transition, predict the carrier densities, find the activation barriers for the formation and mobility of defects and the magnetic ordering.
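Since the comparison turns on how strongly each functional/basis-set combination reproduces the tetragonal distortion, a minimal sketch of the tetragonality metric may help fix ideas. The lattice parameters below are illustrative placeholders only, not results from this work:

```python
def tetragonality(a_ang, c_ang):
    """Tetragonal distortion c/a - 1 (zero for the cubic phase).

    a_ang, c_ang: tetragonal lattice parameters in angstroms.
    """
    return c_ang / a_ang - 1.0

# Illustrative lattice values only (not from the paper). The AFD
# distortion of STO is small, of order 1e-3, so a basis set that
# "suppresses the tetragonality" drives this metric toward zero,
# while an overestimating functional inflates it.
cubic   = tetragonality(3.905, 3.905)   # cubic reference: exactly 0
afd     = tetragonality(3.900, 3.907)   # small tetragonal distortion
overest = tetragonality(3.890, 3.920)   # a pure-DFT-like overestimate
```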
Data structures and algorithms in pen-based computing environments
Authors: Victor Adamchik
Abstract: Data structure visualization (or animation) has been studied for more than twenty years, yet existing systems have not gained wide acceptance in the classroom by students and their instructors. The main reason is that animation preparation is too time-consuming. A more technical reason is that once a particular data structure is encoded into an animation, it lacks the flexibility often needed in a classroom setting. There is also a pedagogical reason: a number of prior studies have found that using algorithm visualization in a classroom had no significant effect on student performance. We believe that the tablet PC, empowered by digital ink, will challenge the current boundaries imposed upon algorithm animation. One of the potential advantages of this new technology is that it allows the expression and exchange of ideas in an interactive environment using sketch-based interfaces. In this paper we discuss a tablet PC-based teaching and learning environment in which students, using a stylus, draw a particular instance of a data structure and then invoke an algorithm to animate over it. A completely natural way of drawing with a digital pen generates a data structure model which, once checked for correctness, serves as a basis for the execution of various computational algorithms.
In the future, we will extend the above visualization tool into a hybrid theorem prover system. Experience shows that many computer science students have great difficulties with the proof methods encountered in, say, an advanced course on algorithms. Indeed, the logical foundation of a proof argument often seems to escape some of the students. We propose to transform students' experience with proofs by incorporating pen-based technology into introductory computer science courses. In particular, we consider formal proofs in Euclidean geometry. The cornerstone of this model is the concept of geometrical sketching, dynamically combined with an underlying mathematical model. A completely natural way of drawing with a digital pen will generate a system of polynomial equations in several variables. The latter will be fed to a theorem prover, based on the Gröbner bases technique, which will automatically establish inner properties of the model. Moreover, once a particular mathematical model is created and checked for accuracy, it will serve as a basis for logical deduction of various geometrical statements that might follow. Finally, a detailed step-by-step exposition of the proving process will be provided.
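As a hypothetical illustration of the Gröbner-basis proving step described above, the following sketch (using SymPy's `groebner`) checks a classic theorem — the midpoint of the hypotenuse of a right triangle is equidistant from its vertices — by reducing the conclusion polynomial modulo the ideal generated by the hypotheses. The coordinate setup is our own example, not one taken from the paper:

```python
from sympy import symbols, groebner, expand

# Right triangle A=(0,0), B=(a,0), C=(0,b); M=(mx,my) is the
# midpoint of the hypotenuse BC.
mx, my, a, b = symbols('mx my a b')

# Hypotheses encoded as polynomials that must vanish.
hypotheses = [
    2*mx - a,   # mx = a/2
    2*my - b,   # my = b/2
]

# Conclusion: |MA|^2 == |MB|^2, i.e. M is equidistant from A and B.
conclusion = expand((mx**2 + my**2) - ((mx - a)**2 + my**2))

# Reduce the conclusion modulo a Groebner basis of the hypotheses;
# a zero remainder means the conclusion follows algebraically.
G = groebner(hypotheses, mx, my, a, b, order='lex')
_, remainder = G.reduce(conclusion)
print(remainder)  # 0
```

A real sketch-recognition front end would generate the hypothesis polynomials automatically from the drawn figure; the reduction step shown here is the prover's core.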
Nanoscale Brownian motion-based thermometry in near wall region
Authors: Anoop Kanjirakat, Rana Khader and Reza Sadr
Abstract: In nanoparticle image velocimetry (nPIV), evanescent wave illumination is used to measure near-wall velocity fields with an out-of-plane resolution of less than 200nm. A similar methodology can be extended to temperature measurement using the Brownian motion characteristics of sub-micron tracer particles in this region. A temperature change affects the Brownian motion of tracer particles through changes in both the Brownian diffusion coefficient and the viscosity. The present study numerically investigates the possibility of exploiting this effect for near-wall thermometry. Synthetic nPIV images of illuminated 100nm-diameter tracer particles are first generated. The spatial distribution of the particles takes into account near-wall forces such as buoyancy, electrostatic repulsion and van der Waals attraction, in addition to hindered Brownian motion. Validation studies are carried out using stationary liquids at constant temperatures. It is believed that these observations will help explain the anomalous heat transfer characteristics of nanofluids.
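The temperature sensitivity being exploited comes from the Stokes-Einstein relation, D = k_B T / (3 π μ d), where both T and the viscosity μ(T) vary with temperature. A minimal sketch, assuming water as the working fluid and a standard empirical viscosity correlation (the fit and the 100nm particle size are illustrative stand-ins, not the study's actual parameters):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def water_viscosity(t_celsius):
    """Vogel-type empirical fit for water viscosity (Pa*s).

    Illustrative correlation, adequate between roughly 0 and 100 C.
    """
    return 2.414e-5 * 10 ** (247.8 / (t_celsius + 273.15 - 140.0))

def brownian_diffusion(d_particle, t_celsius):
    """Stokes-Einstein diffusion coefficient for a sphere (m^2/s)."""
    t_kelvin = t_celsius + 273.15
    mu = water_viscosity(t_celsius)
    return K_B * t_kelvin / (3.0 * math.pi * mu * d_particle)

# 100 nm tracer: diffusion speeds up markedly with temperature --
# mostly through the drop in viscosity -- which is the sensitivity
# that Brownian-motion-based thermometry relies on.
d20 = brownian_diffusion(100e-9, 20.0)
d60 = brownian_diffusion(100e-9, 60.0)
```

In a measurement, one would invert this relation: estimate D from the observed mean-square displacement of the tracers between frames, then solve for the local temperature.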
ParaNets: a parallel network architecture for the future internet
Authors: Khaled Harras and Abderrahmen Mtibaa
Abstract: The evolution of networking technologies and portable devices has led users to expect connectivity anytime and everywhere. We have reached the point of seeing networking occur underwater, via aerial devices, and across space. While researchers push the true boundaries of networking to serve a wide range of environments, the challenge remains of providing robust network connectivity beyond the boundaries of the core internet, defined by fiber optics and well-organized backbones. As the internet edges expand, the expectation is that connectivity will be as good, in terms of high bandwidth and minimal interruption, as anywhere in the core. Such an expectation contradicts the inherent nature of connectivity at the edges.
Researchers have been trying to solve this problem primarily by layering more network connection opportunities using newer technologies such as WiFi, WiMax, and cellular networks. The result is not better robustness, just more of the same. The choice of which network to use is somewhat dependent on location, partially driven by economics, and ultimately decided by the user.
Our goal is to create a research thrust that builds robust networking at the edge of the internet by integrating various network technologies. These technologies should ultimately enable users to connect to the internet more seamlessly. Mobile devices and the applications running on them are currently incapable of identifying the various potential communication opportunities and seamlessly utilizing them to maximize throughput. Furthermore, these applications should be able to utilize these connection opportunities in parallel, be resilient to disruptions, and optimize this utilization in response to rising cost and energy concerns.
This overall objective requires fundamentally reworking the internet's connectivity model to exploit the array of networking opportunities, and evolving the traditional protocol stack into a more dynamic plug-and-play stack.
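As a hypothetical sketch of what utilizing connection opportunities in parallel could look like at the scheduling level, the following deficit-round-robin-style loop spreads packets across links in proportion to their estimated bandwidth. The link names and rates are invented for illustration and do not come from the ParaNets design itself:

```python
def schedule(packets, links):
    """Assign packets to parallel links proportionally to bandwidth.

    packets : iterable of packets (any objects)
    links   : dict name -> estimated bandwidth (e.g. Mbps)
    returns : dict name -> list of packets assigned to that link
    """
    total = sum(links.values())
    names = sorted(links)  # deterministic order for the sketch
    assignment = {name: [] for name in names}
    credit = {name: 0.0 for name in names}
    for pkt in packets:
        # Each link accrues credit equal to its bandwidth share;
        # the packet goes to the link with the most accumulated
        # credit, which then pays one packet's worth back.
        for name in names:
            credit[name] += links[name] / total
        best = max(names, key=lambda n: credit[n])
        credit[best] -= 1.0
        assignment[best].append(pkt)
    return assignment

# A 20 Mbps WiFi link and a 4 Mbps cellular link share 12 packets 10:2.
out = schedule(list(range(12)), {"wifi": 20.0, "cellular": 4.0})
```

A real ParaNets-style scheduler would additionally react to disruptions, cost, and energy, but the proportional-split core illustrates the parallel-utilization idea.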
Discrimination thresholds of virtual curvature for haptic and visual sensory information and future applications in medical virtual training
Authors: Jong Yoon
Abstract: The senses of vision and touch are vital modalities used in the discrimination of objects. Recent advances in human-computer interface technologies have produced various haptic force-feedback devices for the rehabilitation, information technology and entertainment industries, among others. In this research effort, an inexpensive stylus-type haptic device is used to determine thresholds of concave curvature discrimination in visual-haptic experiments. Discrimination thresholds are found for each sense independently, as well as for combinations of the two with and without the presence of conflicting information.
Results indicate that, on average, the visual sense is about three times more sensitive than the haptic sense in discriminating curvature in virtual environments. Subjects also seem to rely more heavily on the sense that provides the most informative cues rather than on any one particular sense, in agreement with the sensory integration model proposed by other researchers. The authors believe the resulting thresholds may serve as relative comparisons of perceptual performance, and that this study may be further expanded to the auditory and texture senses. This work was supported by the Undergraduate Research Enhancement Program (UREP) of the Qatar National Research Fund. These preliminary studies will constitute a valuable asset to medical virtual training research and development.
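The sensory integration model alluded to above (maximum-likelihood cue combination, with each cue weighted by the inverse of its variance) can be sketched as follows. The 3:1 visual/haptic sensitivity ratio mirrors the reported finding, but the units and function names are our own illustration:

```python
def mle_combine(sigma_v, sigma_h):
    """Optimal weights and combined noise for two independent cues.

    sigma_v, sigma_h : std deviations of the visual and haptic
                       curvature estimates (arbitrary units).
    returns (w_v, w_h, sigma_combined).
    """
    var_v, var_h = sigma_v**2, sigma_h**2
    w_v = var_h / (var_v + var_h)   # noisier cue -> lower weight
    w_h = var_v / (var_v + var_h)
    sigma_c = (var_v * var_h / (var_v + var_h)) ** 0.5
    return w_v, w_h, sigma_c

# Visual ~3x more sensitive, so haptic noise is ~3x larger: the
# visual cue then carries 90% of the weight, and the combined
# estimate is slightly less noisy than vision alone.
w_v, w_h, sigma_c = mle_combine(1.0, 3.0)
```

Under this model, relying "more heavily on the sense that provides the most informative cues" is exactly what the inverse-variance weighting predicts.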