Qatar Foundation Annual Research Forum Volume 2011 Issue 1
- Conference date: 20-22 Nov 2011
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2011
- Published: 20 November 2011
Characterizing Scientific Applications on Virtualized Cloud Platforms
Abstract: In general, scientific applications require different types of computing resources depending on the application's behavior and needs. For example, page indexing in an Arabic search engine requires sufficient network bandwidth to process millions of web pages, while seismic modeling is CPU- and graphics-intensive for real-time fluid analysis and 3D visualization. As a potential solution, cloud computing, with its elastic, on-demand, pay-as-you-go model, can offer a variety of virtualized compute resources to satisfy the demands of diverse scientific applications. Currently, deploying scientific applications onto large-scale virtualized cloud platforms is based on random mapping or rules of thumb developed through past experience. Such provisioning and scheduling techniques cause overload or inefficient use of the shared underlying computing resources while delivering few, if any, performance guarantees. Virtualization, a core enabling technology of cloud computing, provides the coveted flexibility and elasticity, yet it introduces several difficulties in resource mapping for scientific applications.
To enable informed provisioning, scheduling and optimization on cloud infrastructures running scientific workloads, we propose a profiling technique to characterize the resource needs and behavior of such applications. Our approach provides a framework for characterizing scientific applications by their resource capacity needs, communication patterns, bandwidth needs, sensitivity to latency, and degree of parallelism. Although the programming model can significantly affect these parameters, this initial work focuses on applications developed using the MapReduce and Dryad programming models. We profile several applications while varying the cloud configuration and scale of resources in order to study their particular resource needs and behavior and to identify the resources that limit performance. A manual, iterative process using a variety of representative input data sets is necessary to reach informative conclusions about the major characteristics of an application's resource needs and behavior. Using this information, we provision and configure a cloud infrastructure, given the available resources, to best target the given application. In this preliminary work, we show experimental results across a variety of applications and highlight the merit of precise application characterization for efficiently utilizing the available resources across different applications.
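Characterization of this kind starts from simple per-run measurements. As a rough illustration (not the authors' actual framework), a profiling harness might record wall time and peak memory for each run of a workload:

```python
import resource  # Unix-only; ru_maxrss is reported in KB on Linux
import time

def profile_run(workload, *args):
    """Run a workload once and record two of the metrics an
    application profile is built from (illustrative sketch only)."""
    t0 = time.perf_counter()
    workload(*args)
    wall_s = time.perf_counter() - t0
    # Peak resident set size of this process so far.
    peak_rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return {"wall_s": wall_s, "peak_rss": peak_rss}

stats = profile_run(sum, range(1_000_000))
```

Repeating such measurements across input sizes and cloud configurations yields the kind of per-application profile the abstract describes; bandwidth and communication-pattern metrics would come from network counters rather than process accounting.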
Human-Robot Interaction in an Arabic Social and Cultural Setting
Authors: Imran Fanaswala, Maxim Makatchev, Brett Browning, Reid Simmons and Majd Sakr
Abstract: We have permanently deployed Hala, the world's first English- and Arabic-speaking robot receptionist, for 500+ days in an uncontrolled multi-cultural, multi-lingual environment within Carnegie Mellon Qatar.
Hala serves as a research testbed for studying the influence of socio-cultural norms and the nature of interactions during human-robot interaction within a multicultural, yet primarily ethnic Arab, setting.
Hala, as a platform, owes its uptime to several independently engineered components for modeling user interactions, syntactic and semantic language parsing, inviting users with a laser, handling facial animations, text-to-speech and lip synchronization, error handling and reporting, post dialogue analysis, networking/interprocess communication, and a rich client interface.
We conjecture that disparities exist in discourse, appearance, and non-verbal gestures amongst interlocutors of different cultures and native tongues. By varying Hala's behavior and responses, we gain insight into these disparities (if any); accordingly, we have measured the rate of thanks after the robot's answer across cultures, users' willingness to answer personal questions, the correlation between language and acceptance of robot invitations, the duration of conversations, and the effectiveness of running an open-ended experiment versus surveys.
We want to understand whether people communicate with a robot (that is, an inanimate object with human-like characteristics) differently than amongst themselves. Additionally, we want to examine these differences and similarities while accounting for culture and language. Our results indicate that users in Qatar thanked Hala less frequently than their counterparts in the US. The robot often answered personal questions and inquiries (e.g., her marital status or job satisfaction); however, only 10% of the personal questions posed by the robot were answered by users. We observed a 34% increase in interactions when the robot initiated the conversation by inviting nearby users, and the duration of those conversations also increased by 30%. When language is taken into account, native Arabic speakers were twice as likely to accept an invitation from the robot, and they also tended to converse about 25% longer than users from other cultures.
These results indicate a disparity in interaction between English and Arabic users, thereby encouraging the creation of culture-specific dialogues, appearances and non-verbal gestures for an engaging social robot with regionally relevant applications.
Overcoming Machine Tools Blindness by a Dedicated Computer Vision System
Authors: Hussien J Zughaer and Ghassan Al-Kindi
Abstract: Although Computerized Numerical Control (CNC) machines are currently regarded as the heart of machining workshops, they still suffer from machine blindness: they cannot automatically judge the performance of the applied machining parameters or monitor tool wear. Therefore, parts produced on these machines may not be as precise as expected. In this research an innovative system is developed and successfully tested to improve the performance of CNC machines. The system utilizes a twin-camera computer vision system integrated with the CNC machine controller to facilitate on-line monitoring and assessment of machined surfaces. The output of the monitoring and assessment task is used for real-time control of the applied machining parameters by a decision-making subsystem, which automatically decides whether to keep or alter the employed machining parameters or to apply a tool change. To facilitate the integration of computer vision with CNC machines, a comprehensive system is developed to tackle a number of issues that obstruct such integration, including scene visibility (e.g., the effects of coolant and cut chips, as well as camera mounting and lighting), the effect of machine vibration on the quality of obtained roughness measurements, the selection of the most appropriate roughness parameter, and the assessment of the effects of machining parameters on acquired roughness measurements. Two system rigs employing different models of CNC machine are developed and used in the conducted tests to help generalize the findings. Two cameras are mounted on the machine spindle of each of the two CNC machines to provide valid image data according to the cutting direction.
Proper selection and activation of the relevant camera is achieved automatically by the developed system, which analyzes the most recently conducted tool-path movement to decide which camera to activate. To assess machined-surface quality and cutting-tool status, image data are processed to evaluate the tool imprints left on the machined surface. An indicating parameter to assess the resulting tool imprints is proposed and used. The overall results show the validity of the approach and encourage further development toward wider-scale application of vision-based CNC machines.
EEG - Mental Task Discrimination by Digital Signal Processing
Abstract: Recent advances in computer hardware and signal processing have made possible the use of EEG signals, or “brain waves”, for communication between humans and computers. Locked-in patients now have a way to communicate with the outside world, but even with the latest techniques, such systems still suffer from communication rates on the order of 2-3 tasks/minute. In addition, existing systems are often not designed with flexibility in mind, leading to slow systems that are difficult to improve.
This thesis classifies different mental tasks through the use of the electroencephalogram (EEG). EEG signals from several subjects, recorded through channels (electrodes), have been studied during the performance of five mental tasks: a baseline task, for which the subjects were asked to relax as much as possible; a multiplication task, for which the subjects were given a nontrivial multiplication problem to solve without vocalizing or making any other movements; a letter-composing task, for which the subjects were instructed to mentally compose a letter to a friend without vocalizing; a rotation task, for which the subjects were asked to visualize a particular three-dimensional block figure being rotated about its axis; and a counting task, for which the subjects were asked to imagine a blackboard and to visualize numbers being written on it sequentially.
The work presented here may become part of a larger project whose goal is to classify EEG signals belonging to a varied set of mental activities in a real-time brain-computer interface, in order to investigate the feasibility of using different mental tasks as a wide communication channel between people and computers.
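The abstract does not name its discriminating features, but per-band spectral power is a common starting point for EEG mental-task classification. A minimal sketch on synthetic data, using a naive DFT (real pipelines use FFTs and windowing):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Estimate signal power in the band [f_lo, f_hi] Hz via a naive DFT."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(-x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            power += (re * re + im * im) / n
    return power

# Toy example: one second of a pure 10 Hz "alpha" oscillation at 128 Hz
fs = 128
signal = [math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]
alpha = band_power(signal, fs, 8, 12)    # contains the 10 Hz tone
beta = band_power(signal, fs, 13, 30)    # essentially empty here
```

Features like `alpha` and `beta` per electrode, collected across task trials, would then feed whatever classifier discriminates the five tasks.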
Joint Hierarchical Modulation and Network Coding for Two Way Relay Network
Authors: Rizwan Ahmad, Mazen O. Hasna and Adnan Abu-Dayya
Abstract: Cooperative communication has recently gained a lot of attention in the research community. This is because the broadcast nature of wireless networks, earlier considered a drawback, can now be used to provide spatial diversity to increase throughput, reduce energy consumption and provide network resilience. The main drawback of cooperative communication is that it requires more bandwidth than traditional communication networks. Decode and Forward (DF) is a cooperative forwarding strategy in which the relay nodes first decode the data and then retransmit it to the destination. DF requires advanced techniques at the intermediate relay nodes to improve spectrum utilization.
Two well-known techniques for spectrum efficiency are Network Coding (NC) and Hierarchical Modulation (HM). With NC, nodes in a network can combine packets for transmission, thus reducing the number of transmissions. HM is a technique that allows transmission of multiple data streams simultaneously. Both are useful techniques for spectral efficiency.
In this work, we evaluate the performance of a joint HM and NC scheme for two-way relay networks. The relaying is based on a signal-to-noise ratio (SNR) threshold at the relay. In particular, a two-way cooperative network with two sources and one relay is considered, as shown in Fig. 1. Two different protection classes are modulated by a hierarchical 4/16-Quadrature Amplitude Modulation (QAM) constellation at the source. Based on the instantaneous received SNR, the relay decides to retransmit both classes using the hierarchical 4/16-QAM constellation, to retransmit only the more-protected class using a Quadrature Phase Shift Keying (QPSK) constellation, or to remain silent. These thresholds at the relay give rise to multiple transmission scenarios in a two-way cooperative network. Simulation results are provided to verify the analysis.
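The relay's three-way decision can be sketched as a pair of SNR thresholds. The threshold values below are illustrative placeholders, not values from the paper:

```python
def relay_action(snr_db, t_high=18.0, t_low=9.0):
    """Choose the relay's retransmission mode from its instantaneous SNR.

    t_high/t_low are hypothetical thresholds for illustration only.
    """
    if snr_db >= t_high:
        return "4/16-QAM"   # both protection classes decoded reliably
    if snr_db >= t_low:
        return "QPSK"       # only the more-protected class is forwarded
    return "silent"         # relay cannot decode reliably; stays silent
```

Each pair of source-to-relay SNRs then selects one of the transmission scenarios whose combination the paper analyzes.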
Repairing Access Control Configurations via Answer Set Programming
Authors: Khaled Mohammed Khan and Jinwei Hu
Abstract: Although various access control models have been proposed, access control configurations remain error-prone, and there is no assurance of their correctness. When we find errors in an access control configuration, we take immediate action to repair it. The repair is difficult, largely because arbitrary changes to the configuration may pose no less a threat than the errors themselves; in other words, constraints are placed on the repaired configuration. The presence of constraints makes a manual trial-and-error approach unattractive, for two main reasons. First, it is not clear whether the objectives are reachable at all; if they are not, we waste time attempting an impossible repair. Second, we have no knowledge of the quality of the solution, such as the correctness of the repair.
To address these shortcomings, we aim to develop an automated approach to repairing access control configurations. We utilize answer set programming (ASP), a declarative knowledge representation paradigm, to support this automation. The rich modeling language of ASP enables us to capture and express the repair objectives and constraints. In our approach, the repair instance is translated into an ASP program, and ASP solvers are invoked to evaluate it.
Although applications of ASP follow the general “encode-compute-extract” approach, they differ in how problems are represented in ASP. In our case, two principal factors render the problem and approach non-trivial. First, we need to identify constraints that are not only amenable to ASP interpretation but also expressive enough to capture common idioms of security and business requirements; there is a trade-off to make. Second, our ASP programs should model the quality measure of repairs: when more than one repair is possible, the reported one is optimized in terms of that measure. We have also undertaken extensive experiments on both real-world and synthetic data sets. The experimental results validate the effectiveness and efficiency of the automated approach.
A Security Profile-based Assurance Framework for Service Software
Abstract: Service software is a self-contained, modular application deployed over standard computing platforms and readily accessible to users within or across organizational boundaries via the Internet. For businesses to open up their applications for interaction with other service software, a fundamental requirement is a sufficient choice of security provisions, allowing service consumers to select and verify the actual security assurances of services. In this context, the specific research challenge is how to design service software around the consumer's specific security requirements and provide assurances for those security needs. Clearly, security requirements vary from consumer to consumer. This work outlines a framework for selecting service software consistent with consumers' security requirements and for checking the compatibility of the assurances provided by services. We use profile-based compatibility analysis techniques as an essential building block towards assuring the security of service software.
In our research, we envision security-profile-based compatibility checking that focuses on automatic analysis of security compatibility using formal analysis of the security properties of software services. Our approach is based on three main building blocks: reflection of security assurances; selection of preferred assurances; and checking of security compatibility. We foresee that the proposed scheme for service security profiling and compatibility analysis will significantly advance the state of practice in service-oriented computing towards realizing its full potential; at the same time, its development represents a new and highly challenging research target in the area.
This work is of great significance to the development of future software systems that facilitate security-aware cross-organizational business activities. The envisioned capability to integrate service software across organizational boundaries while meeting the security requirements of all parties represents a significant technological advance in enabling practical business-to-business computing, leading to new business opportunities. At the same time, the approach will make a significant scientific advance in understanding the problem of application-level system security in a service-oriented computing context.
Hierarchical Clustering for Keyphrase Extraction from Arabic Documents Based on Word Context
Authors: Rehab Duwairi, Fadila Berzou and Souad Mecheter
Abstract: Keyphrase extraction is the process of identifying the set of words or phrases that best describe a document. The phrases may be extracted from the document's own words, or they may be external, drawn from an ontology for a given domain. Extracting keyphrases from documents is critical for many applications, such as information retrieval, document summarization and clustering. Many keyphrase extractors view the problem as a classification problem and therefore need training documents (i.e., documents whose keyphrases are known in advance). Other systems view keyphrase extraction as a ranking problem: the words or phrases of a document are ranked by importance, and phrases of high importance (usually located at the beginning of the list) are recommended as possible keyphrases.
This abstract describes Shihab, a system for extracting keyphrases from Arabic documents. Shihab views keyphrase extraction as a ranking problem. The list of keyphrases is generated by clustering the phrases of a document. Phrases are built from words that appear in the document and consist of 1, 2 or 3 words. The idea is to group similar phrases into one cluster, where the similarity between phrases is determined by calculating the Dice value of their corresponding contexts; a phrase's context is the sentence in which it appears. Agglomerative hierarchical clustering is used in the clustering phase. Once the clusters are ready, each cluster nominates a phrase, called the cluster representative and determined according to a set of heuristics, to the set of candidate keyphrases. Shihab's results were compared with those of existing keyphrase extractors such as KP-Miner and Arabic-KEA, and the results were encouraging.
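The Dice similarity used to compare phrase contexts is simple to state: twice the shared-word count over the total word count. A minimal sketch, treating each sentence context as a set of words:

```python
def dice(context_a, context_b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two sentence contexts."""
    a, b = set(context_a.split()), set(context_b.split())
    if not a or not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))

# Two contexts sharing two of their four words each -> 2*2 / (4+4) = 0.5
score = dice("keyphrase extraction from documents", "extraction of arabic documents")
```

Agglomerative clustering then repeatedly merges the pair of clusters whose contexts have the highest Dice similarity.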
On the Design of Learning Games and Puzzles for Children with Intellectual Disability
Authors: Aws Yousif Fida El-Din and Jihad Mohamed Aljaam
Abstract: The objective of this paper is to present the edutainment learning games that we are developing for Qatari children with moderate intellectual disability. These games will help them learn effectively in fun and enjoyable ways. We use multimedia technology merged with intelligent algorithms to guide the children in play. As the number of children with intellectual disability increases, early intervention to teach them properly using information technology is very important. However, few research projects on disability are conducted in the Arab world, and these projects are still not enough to respond to the real needs of the disabled and achieve satisfactory outcomes. Developing edutainment games for children with intellectual disability is a very challenging task. First, it requires content developed by specialized instructors. Second, the interface of the games must be presented clearly and be easy to interact with. Third, the games must run slowly, to give the children time to think and interact. Fourth, regardless of the results, the game must provide a minimum level of general satisfaction, to avoid discouraging the children. Fifth, the game must make maximum use of multimedia elements to draw the children's attention. We show some multimedia applications for children with different disabilities developed at Qatar University. One application enhances mathematics skills, with a symbolic gift rewarded for a correct answer; this feature motivated the children to play the game several times a day. The application also uses videos to illustrate the game before the children play it. The purpose of the second multimedia application is to test the children's memory; it uses different multimedia elements to present games that require deep concentration to guess the answer.
These games helped the children develop a strong sense of self-confidence. The learning puzzles we have developed are based on intelligent algorithms that avoid cycling and allow the children to reach a solution. Two different approaches were used: simulated annealing and tabu search.
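Simulated annealing, one of the two search strategies named above, avoids the cycling of a purely greedy search by occasionally accepting worse moves. A generic sketch on a toy tile-ordering puzzle; the paper's actual puzzle encoding is not specified, so the cost and neighbor functions here are hypothetical:

```python
import math
import random

def simulated_annealing(state, cost, neighbor, t0=10.0, cooling=0.95, steps=2000):
    """Generic annealing loop: accept worse moves with probability
    exp(-delta/T), with T decaying geometrically."""
    random.seed(0)  # deterministic for the example
    t = t0
    best = state
    for _ in range(steps):
        cand = neighbor(state)
        delta = cost(cand) - cost(state)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            state = cand
        if cost(state) < cost(best):
            best = state
        t *= cooling
    return best

# Toy puzzle: order the tiles [3, 1, 2, 0] into ascending order.
cost = lambda s: sum(abs(v - i) for i, v in enumerate(s))  # 0 when solved

def neighbor(s):
    """Swap two random positions."""
    s = list(s)
    i, j = random.randrange(len(s)), random.randrange(len(s))
    s[i], s[j] = s[j], s[i]
    return s

solved = simulated_annealing([3, 1, 2, 0], cost, neighbor)
```

Tabu search would instead keep a short memory of recent moves and forbid reversing them, which is another way to prevent cycling.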
Minimal Generators Based Algorithm for Text Features Extraction: A More Efficient and Large Scale Approach
Authors: Samir Elloumi, Fethi Fejani, Sadok Ben Yahia and Ali Jaoua
Abstract: In recent years, several mathematical concepts have been successfully explored in computer science as bases for original solutions to complex problems in knowledge engineering, data mining, information retrieval, etc.
Thus, Relational Algebra (RA) and Formal Concept Analysis (FCA) may be considered useful mathematical foundations that unify data and knowledge in information retrieval systems. For example, certain elements of a fringe relation (from the RA domain), called isolated points, have been successfully used in FCA as formal concept labels or composed labels. Once associated with words in a textual document, these labels constitute relevant features of the text. Here, we propose the GenCoverage algorithm for covering a formal context (as a formal representation of a text) based on isolated labels, and we use these labels (or text features) for categorization, corpus structuring, and micro-macro browsing as an advanced functionality of the information retrieval task.
The main thrust of the approach relies on the close connection between isolated points and minimal generators (MGs). MGs stand at the antipodes of the closures within their respective equivalence classes: because minimal generators are the smallest elements within an equivalence class, their detection and traversal is greatly eased, permitting swift construction of the coverage. Thorough experiments provide empirical evidence of the performance of our approach.
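To make the notions concrete: in a formal context, the closure of an attribute set is the set of attributes shared by every object having it, and a minimal generator is a set no proper subset of which has the same closure. A brute-force sketch, far less efficient than the traversal the paper relies on:

```python
def closure(attrs, context):
    """Attributes shared by every object possessing all of `attrs`."""
    rows = [oa for oa in context.values() if attrs <= oa]
    if not rows:                        # no object has all of attrs
        return set().union(*context.values())
    out = set(rows[0])
    for oa in rows[1:]:
        out &= oa
    return out

def is_minimal_generator(cand, context):
    """True iff removing any single attribute changes the closure."""
    c = closure(cand, context)
    return all(closure(cand - {a}, context) != c for a in cand)

# A tiny context: objects mapped to their attribute sets
ctx = {"o1": {"a", "b"}, "o2": {"a"}, "o3": {"b", "c"}}
```

Here `{"b", "c"}` is not a minimal generator, because dropping `"b"` leaves `{"c"}` with the same closure `{"b", "c"}`.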
Preserving Privacy from Unsafe Data Correlation
Authors: Bechara Al Bouna, Christopher Clifton and Qutaibah Malluhi
Abstract: With the emergence of cloud computing, safe data outsourcing has become an active topic. Several regulations have been issued to ensure that individual and corporate information is kept private in a cloud computing environment. To guarantee that these regulations are fully upheld, the research community has proposed privacy constraints such as k-anonymity, l-diversity and t-closeness. These constraints are based on generalization, which transforms identifying attribute values into a more general form and partitions the data to eliminate possible linking attacks. Despite their efficiency, generalization techniques severely affect the quality of outsourced data and their correlations. To cope with these defects, Anatomy has been proposed. Anatomy releases quasi-identifier values and sensitive values into separate tables, which preserves privacy while capturing a large amount of data correlation. However, there are situations where data correlation can lead to an unintended leak of information. For example, if an adversary knows that patient Roan (P1) takes a regular drug, the join of the Prescription quasi-identifier table (QIT) and the Prescription sensitive table (SNT) on the attribute GID leads to the association of RetinoicAcid with patient P1 due to correlation.
In this paper, we present a study to counter privacy violations due to data correlation while at the same time improving aggregate analysis. We show that privacy requirements affect table decomposition based on what we call correlation dependencies. We propose a safe-grouping principle to ensure that correlated values are grouped together in unique partitions that obey l-diversity while preserving the correlation. An optimization strategy is also designed to reduce the number of anonymized tuples. Finally, we extended the UTD Anonymization Toolbox to implement the proposed algorithm and demonstrate its efficiency.
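The l-diversity condition that safe grouping enforces is, in its simplest (distinct) form, easy to state: each partition must contain at least l distinct sensitive values. A minimal check, hypothetical rather than the paper's algorithm:

```python
def is_l_diverse(partition, l):
    """Distinct l-diversity: the partition must contain at least
    l different sensitive values."""
    return len(set(partition)) >= l

def violating_partitions(partitions, l=2):
    """Return the ids of partitions an adversary could exploit."""
    return [pid for pid, vals in partitions.items()
            if not is_l_diverse(vals, l)]

# GID -> sensitive drug values, as in an SNT-style release
snt = {1: ["RetinoicAcid", "Aspirin"], 2: ["RetinoicAcid", "RetinoicAcid"]}
bad = violating_partitions(snt)
```

Partition 2 fails the check: every tuple in it maps to RetinoicAcid, which is exactly the correlation-driven leak described for patient P1.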
News Alerts Trigger System to Support Business Owners
Authors: Jihad Mohamad Aljaam, Khaled Sohil Alsaeed and Ali Mohamad Jaoua
Abstract: The exponential growth of financial news from different sources makes it very hard to benefit from it effectively. Business decision makers who rely on this news are unable to follow it accurately in real time. They need to be alerted immediately to any potential financial event that may affect their business whenever it occurs. Important news can be buried in thousands of lines of financial data and go undetected, yet such news may have a major impact on businesses, and key decision makers should be alerted to it. In this work, we propose an alert system that screens structured financial news and triggers alerts based on user profiles. These alerts have three priority levels: low, medium and high. Whenever the priority level is high, quick intervention is required to avoid potential business risk; such events are treated as tasks to be handled immediately. Matching user profiles with news events can sometimes be straightforward, but it is challenging when the keywords in the user profiles are merely synonyms of the event keywords. In addition, alerts can depend on combinations of correlated news events coming from different sources of information, which makes their detection computationally challenging. Our system allows users to define their profiles in three different ways: (1) selecting from a list of keywords related to event properties; (2) providing free keywords; and (3) entering simple short sentences. The system triggers alerts immediately whenever news events related to a user profile occur, taking into consideration correlated events and the concordance of event keywords with synonyms of the profile keywords.
The system uses the vector space model to match keywords with the words of news events. As a consequence, the rate of false-positive alerts is low, whereas the rate of false-negative alerts is relatively high; however, enriching the dictionary of synonyms would reduce the false-negative rate.
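Under the vector space model, a profile-event match reduces to cosine similarity between term vectors. A minimal sketch using raw term counts (no TF-IDF weighting or synonym expansion, the latter being exactly what the abstract says is needed to cut false negatives):

```python
import math
from collections import Counter

def cosine_match(profile_keywords, news_text):
    """Cosine similarity between a user profile and one news item."""
    a = Counter(w.lower() for w in profile_keywords)
    b = Counter(news_text.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

score = cosine_match(["oil", "price"], "Oil price falls sharply")
```

An alert would fire when `score` exceeds a per-priority threshold; a synonym dictionary would expand the profile terms before the comparison.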
Assistive Technology for People with Hearing and Speaking Disabilities
Abstract: People with hearing or speaking disabilities represent a significant component of society that needs to be well integrated in order to foster advancement through the contributions of every member of society. When these people cannot read lips, they usually need interpreters to help them communicate with people who do not know sign language, and they also need an interpreter when they use phones, because communication is not easy unless they use special aiding devices such as a relay service or instant messaging (IM). As the number of people with hearing and speaking disabilities increases significantly, building bridges of communication between the deaf and hearing communities is essential to deepen mutual cooperation in all aspects of life. The problem can be summarized in one question: how do we construct a bridge that allows people with hearing and speaking difficulties to communicate?
This project proposes an innovative framework that contributes to the efficient integration of people with hearing disabilities into society using wireless communication and mobile technology. Unlike existing solutions (CapTel and relay services), it is completely operator-independent: it depends on a powerful Automatic Speech Recognition and Processing Server (ASRPS) that can process speech and transform it into text, recognizing the voice regardless of the speaker and the characteristics of his/her voice. On the other side there is a Text-To-Speech (TTS) engine, which takes the text sent to the application server and transmits it as speech. The second aim of the project is to develop an iPhone/iPad application for the hearing-impaired. The application facilitates reading of the received text by converting it into sign-language animations constructed from a database; we currently use American Sign Language for its simplicity, but the application can be extended to other languages such as Arabic Sign Language and British Sign Language. The application also assists the writing process through a customized user interface, including a customized keyboard, that lets the deaf communicate efficiently with others.
Using Cognitive Dimensions Framework to Evaluate Constraint Diagrams
Authors: Noora Fetais and Peter Cheng
Abstract: The Cognitive Dimensions of Notations is a heuristic framework created by Thomas Green for analysing the usability of notational systems. Microsoft has used this framework as a vocabulary for evaluating the usability of its C# and .NET development tools. In this research we used the framework to compare an evaluation of constraint diagrams with an evaluation of natural language by running a usability study. The result of this study will help determine whether users are able to use constraint diagrams to accomplish a set of tasks, and from it we can predict difficulties that may be faced when working on these tasks. Two steps were required. The first step is to decide what generic activities the system is intended to support; an activity is described at a rather abstract level in terms of the structure of information and constraints on the notational environment. The cognitive dimensions constitute a method for theoretically evaluating the usability of a system; the dimensional checklist approach is used to improve different aspects of the system, and each improvement carries a trade-off cost on other aspects. Each generic activity has its own requirements in terms of cognitive dimensions, so the second step is to scrutinize the system and determine where it lies on each dimension. If the two profiles match, all is well. Every dimension should be described with illustrative examples, case studies, and associated advice for designers. In general, an activity such as exploratory design, where software designers make changes at different levels, is the most demanding: dimensions such as viscosity and premature commitment must be low, while visibility and role-expressiveness must be high.
IT System for Improving Capability and Motivation of Workers: An Applied Study with Reference to Qatar
Authors: Hussein Alsalemi and Pam Mayhew
Abstract: Information Systems (IS) is a discipline with roots in the field of organizational development, and Information Technology (IT) is a primary tool used by IS to support the aim of developing organizations. However, IT has focused on supporting two main areas, the operational and the strategic, to help organizations achieve their goals. Little research has been devoted to supporting the human workforce as a means of improving an organization's ability to achieve its goals. Within IS, socio-technical theory researches approaches for improving employees' satisfaction for the sake of better work productivity and business value; it is based on the idea of achieving harmonious integration of an organization's different subsystems (social, technical and environmental).
The aim of this research is to find out whether IT can be used to improve the capability and motivation of the workforce in organisations so that these organisations can better achieve their goals.
This research is an applied study with reference to the Qatar National Vision 2030 (QNV2030) Human Pillar. Using the grounded theory (GT) research methodology, the research characterised the main factors that affect the capability and motivation of the Qatari workforce. These findings were used to develop a theoretical model, which identifies the core factors, describes each factor and explains the interactions between them. The theoretical model was then transformed into an IT system consisting of a number of IT tools, which was tested in different organisations in Qatar. The test evaluated the system's effectiveness in improving the capabilities and motivation of the Qatari workforce and explored its effectiveness in helping these organisations better achieve their goals.
The research concluded that the developed IT system, based on the theoretical model, can help improve the motivation and capability of a workforce, provided that a defined set of organisational and leadership qualities exists within the organisation.
-
Utilization of Mixtures as Working Fluids in Organic Rankine Cycle
Authors: Mirko Stijepovic, Patrick Linke, Hania Albatrni, Rym Kanes, Umaira Nisa, Huda Hamza, Ishrath Shiraz and Sally El Meragawi

Abstract: Over the past several years, ORC processes have become very promising for power production from low-grade heat sources: solar, biomass, geothermal and waste heat. The key challenge in the design process is the selection of an appropriate working fluid. A large number of authors have used pure components as the working fluid when assessing ORC performance.
ORC systems that use a single-component working fluid have two major shortcomings. First, in the majority of applications the temperatures of the heat sink and heat source fluids vary during the heat transfer process, whereas the temperature of the working fluid during evaporation and condensation remains constant. As a consequence, a pinch point is encountered in the evaporator, giving rise to large temperature differences at one end of the heat exchanger; this leads to irreversibility that in turn reduces process efficiency. A similar situation is encountered in the condenser. The second shortcoming of the Rankine cycle is its lack of flexibility.
These shortcomings result from a mismatch between the thermodynamic properties of pure working fluids and the requirements imposed by the Rankine cycle and the particular application. In contrast, when working fluid mixtures are used instead of single-component working fluids, improvements can be obtained in two ways: through the inherent properties of the mixture itself, and through cycle variations which become available with mixtures. The most obvious positive effect is a decrease in exergy destruction, since the occurrence of a temperature glide during a phase change provides a good match of temperature profiles in the condenser and evaporator.
This paper presents detailed simulations and economic analyses of Organic Rankine Cycle processes for energy conversion of low-grade heat sources. The paper explores the effect of mixture utilization on common ORC performance assessment criteria in order to demonstrate the advantages of employing mixtures as working fluids compared to pure fluids. We illustrate these effects using zeotropic mixtures of paraffins as ORC working fluids.
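The temperature-glide argument above can be sketched numerically. The following is a minimal illustration with purely hypothetical temperatures and a linearized condensation profile, not the authors' simulation: a pure fluid condenses isothermally against a warming coolant, while a zeotropic mixture with a glide roughly parallel to the coolant achieves the same pinch (minimum approach) temperature with a much smaller mean temperature difference, and hence less irreversibility.

```python
# Illustrative sketch (hypothetical numbers): counter-current condenser profiles
# for a pure working fluid vs. a zeotropic mixture with a temperature glide.
from statistics import mean

N = 101
q = [i / (N - 1) for i in range(N)]        # normalized heat duty, 0..1

# Heat sink (cooling water) warms from 25 C to 35 C across the condenser.
t_sink = [25.0 + 10.0 * x for x in q]

# Pure fluid: condensation is isothermal, here at 40 C.
t_pure = [40.0] * N

# Zeotropic mixture: hypothetical glide from 30 C to 40.5 C, chosen to
# roughly parallel the sink profile.
t_mix = [30.0 + 10.5 * x for x in q]

dt_pure = [tp - ts for tp, ts in zip(t_pure, t_sink)]
dt_mix = [tm - ts for tm, ts in zip(t_mix, t_sink)]

print(f"pure fluid : pinch {min(dt_pure):.2f} K, mean dT {mean(dt_pure):.2f} K")
print(f"mixture    : pinch {min(dt_mix):.2f} K, mean dT {mean(dt_mix):.2f} K")
```

With these numbers both cases respect the same 5 K pinch, but the mixture's mean temperature difference is roughly half that of the pure fluid, which is the "good match of temperature profiles" the abstract refers to.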
-
Application of Nanotechnology in Hybrid Solar Cells
Authors: Narendra Kumar Agnihotra and S Sakthivel

Abstract: Plastic/organic/polymer photovoltaic solar cells are fourth-generation cells; however, the efficiency, thermal stability and cost of fourth-generation solar cells are still not sufficient to replace conventional solar cells. Hybrid solar cells have been one of the alternative technologies for harnessing solar power into electrical power and overcoming the high cost of conventional solar cells. This review paper focuses on the concept of hybrid solar cells combining organic/polymer materials blended with inorganic semiconducting materials. The paper presents the importance of nanoscale materials and of their shape and size (nanotubes, nanowires, nanocrystals), which can increase the efficiency of solar cells. The study shows that nanomaterials have immense potential, and that the application of nanomaterials (inorganic/organic/polymer) can improve the performance of photovoltaic solar cells. Tuning of nanomaterials can modify their functionality, band gap, optical absorption and shape to a far greater degree than is possible with microscale materials. Hybrid solar cells combine the unique properties of inorganic semiconductors with the film-forming properties of conjugated polymers. Hybrid materials have great potential because of their unique properties and are showing promising results at the preliminary stages of research. The advantages of organic/polymer materials are easy processing, roll-to-roll production, lighter weight, and flexible shape and size of the solar cells. The application of nanotechnology in hybrid solar cells has opened the door to the manufacturing of a new class of high-performance devices.
-
Optimized Energy Efficient Content Distribution Over Wireless Networks with Mobile-to-Mobile Cooperation
Authors: Elias Yaacoub, Fethi Filali and Adnan Abu-Dayya

Abstract: Major challenges in the development of next-generation 4G wireless networks include fulfilling the foreseeable increase in the power demand of future mobile terminals (MTs), in addition to meeting the high-throughput and low-latency requirements of emerging multimedia services. Studies show that the high energy consumption of battery-operated MTs will be one of the main limiting factors for future wireless communication systems. Emerging multimedia applications require the MTs' wireless interfaces to be active for long periods while downloading large volumes of data, which drains the batteries.
The evolution of MTs with multiple wireless interfaces helps to deal with this problem. The result is a heterogeneous network architecture with MTs that actively use two wireless interfaces: one to communicate with the base station (BS) or access point over a long-range (LR) wireless technology (e.g., UMTS/HSPA, WiMAX, or LTE), and one to communicate with other MTs over a short-range (SR) wireless technology (e.g., Bluetooth or WLAN). Cooperative wireless networks have proved to offer many advantages in terms of increasing network throughput, decreasing file download time, and decreasing energy consumption at the MTs through SR mobile-to-mobile (M2M) collaboration. However, the studies in the literature apply only to specific wireless technologies in specific scenarios and do not investigate optimal strategies.
In this work, we consider energy minimization in content distribution with M2M collaboration and derive the optimal solution in a general setup with different wireless technologies on the LR and SR. Scenarios with multicasting and unicasting are investigated, and the content distribution delay is also analyzed. Practical implementation aspects of the cooperative techniques are studied, and different methods are proposed to overcome the practical limitations of the optimal solution. Simulation results with different technologies on the LR and SR show the significant superiority of the proposed techniques. Ongoing work focuses on incorporating quality-of-service constraints into the energy minimization problem and on designing a testbed to validate the proposed methods.
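The intuition behind the energy savings from M2M cooperation can be captured with a toy energy model. The following sketch is not the authors' optimization framework; the rates, power figures, and the single-relay multicast strategy are all hypothetical, chosen only to show why offloading distribution to a fast, low-power SR link reduces total energy when every MT wants the same content.

```python
# Hedged sketch: a simple energy model for cooperative content distribution.
# All rates and power figures are hypothetical, for illustration only.

def energy_no_cooperation(n_mts, file_bits, r_lr, p_lr_rx):
    """Every MT downloads the full file over the long-range (LR) link."""
    return n_mts * p_lr_rx * file_bits / r_lr

def energy_with_cooperation(n_mts, file_bits, r_lr, p_lr_rx,
                            r_sr, p_sr_tx, p_sr_rx):
    """One MT downloads over LR, then multicasts over the short-range (SR) link."""
    e_lr = p_lr_rx * file_bits / r_lr                     # single LR download
    e_sr_tx = p_sr_tx * file_bits / r_sr                  # one SR multicast transmission
    e_sr_rx = (n_mts - 1) * p_sr_rx * file_bits / r_sr    # SR reception by the others
    return e_lr + e_sr_tx + e_sr_rx

# Hypothetical example: 10 MTs, a 1 MB file, 1 Mbit/s LR, 10 Mbit/s SR.
e_solo = energy_no_cooperation(10, 8e6, 1e6, p_lr_rx=1.0)
e_coop = energy_with_cooperation(10, 8e6, 1e6, p_lr_rx=1.0,
                                 r_sr=10e6, p_sr_tx=0.5, p_sr_rx=0.2)
print(f"no cooperation : {e_solo:.2f} J")
print(f"M2M cooperation: {e_coop:.2f} J")
```

Under these assumed figures, cooperation cuts total energy by roughly an order of magnitude; the actual gain, and the optimal choice of relays and of unicast versus multicast on each link, is what the optimization in this work determines.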
-
Design of Novel Gas-to-Liquid Reactor Technology Utilizing Fundamental and Applied Research Approaches
Authors: Nimir Elbashir, Aswani Mogalicherla and Elfatih Elmalik

Abstract: This research is in line with the State of Qatar's aspiration of becoming the "gas capital of the world", as it focuses on developing cost-effective Gas-to-Liquid (GTL) technologies via Fischer-Tropsch synthesis (FTS). The objective of our present research activities is to develop a novel approach to FTS reactor design by controlling the thermo-physical characteristics of the reaction media through the introduction of a supercritical solvent.
The research is facilitated by QNRF through its flagship National Priorities Research Program, highlighting the importance of the subject matter to Qatar. It is a multidisciplinary consortium comprising highly specialized teams of leading scientists in their fields from four universities.
FTS is the focal process in which natural gas is converted to ultra-clean liquid fuels; it is a highly complex chemical reaction in which synthesis gas (a mixture of hydrogen and carbon monoxide) enters the reactor and propagates into various hydrocarbons over a metal-based catalyst. Many factors impede current commercial FTS technologies, chiefly transport and thermal limitations arising from the phase of operation (classically either liquid or gas phase). Interestingly, the most advanced FTS technologies employing either the liquid phase or the gas phase are currently in operation in Qatar (Shell's Pearl project and Sasol's Oryx GTL plant).
This project is concerned with the design of an FTS reactor operated under supercritical fluid conditions in order to gain certain advantages over the aforementioned commercial technologies. The design of this novel reactor is based on phase behavior and kinetic studies of the non-ideal SCF media, in addition to a series of process integration and optimization studies coupled with the development of sophisticated dynamic control systems. These results are currently being used at TAMUQ to build a bench-scale reactor to verify the simulation studies.
To date, our collective research has yielded 8 peer-reviewed publications, more than 8 conference papers and proceedings, and numerous presentations at international conferences. Notably, an advisory board composed of experts from the world's leading energy companies follows the progress of this project toward its ultimate goal.
-
Hierarchical Cellular Structures with Tailorable Properties
Authors: Abdel Magid Hamouda, Amin Ajdari, Babak Haghpanah Jahromi and Ashkan Vaziri

Abstract: Hierarchical structures are found in many natural and man-made materials [1]. This structural hierarchy plays an important role in determining the overall mechanical behavior of the structure, and it has been suggested that increasing the hierarchical level of a structure results in a better-performing structure [2]. Honeycombs, in particular, are well-known structures for lightweight, high-strength applications [3]. In this work, we studied the mechanical properties of honeycombs with hierarchical organization using theoretical, numerical, and experimental methods. The hierarchical organization is created by replacing the edges of a regular honeycomb structure with smaller regular honeycombs. Our results show that honeycombs with structural hierarchy have properties superior to those of regular honeycombs: a relatively broad range of elastic properties, and thus of behavior, can be achieved by tailoring the structural organization of hierarchical honeycombs, and more specifically the two dimension ratios. Increasing the level of hierarchy provides a wider range of achievable properties. Further optimization should be possible by also varying the thickness of the hierarchically introduced cell walls, and thus the relative distribution of mass between the hierarchy levels. These hierarchical honeycombs can be used in the development of novel lightweight multifunctional structures, for example as the cores of sandwich panels, or in the development of lightweight deployable energy systems.
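As a first-order illustration of why wall dimension ratios give such leverage over honeycomb properties, the sketch below uses the classical Gibson-Ashby estimates for a regular (non-hierarchical) hexagonal honeycomb, not the hierarchical model analysed in this work: relative density scales linearly with the wall thickness-to-length ratio t/l, while in-plane stiffness scales with its cube, so redistributing mass between thick and thin walls spans a very wide stiffness range.

```python
# Gibson-Ashby first-order estimates for a regular hexagonal honeycomb
# (thin-wall limit); shown here only to illustrate the scaling, not the
# hierarchical structures studied in the abstract.
import math

def relative_density(t_over_l):
    # rho*/rho_s = (2 / sqrt(3)) * (t/l) for regular hexagonal cells
    return (2.0 / math.sqrt(3.0)) * t_over_l

def relative_modulus(t_over_l):
    # In-plane Young's modulus, E*/E_s ~ 2.3 * (t/l)^3
    return 2.3 * t_over_l ** 3

for t_over_l in (0.01, 0.05, 0.10):
    print(f"t/l = {t_over_l:.2f}: rho*/rho_s = {relative_density(t_over_l):.4f}, "
          f"E*/E_s = {relative_modulus(t_over_l):.2e}")
```

Doubling t/l doubles the mass but multiplies the stiffness by eight, which is the kind of disproportionate sensitivity that makes tailoring the dimension ratios across hierarchy levels an effective way to tune the overall elastic response.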
-