Virtualized environments in the cloud can have superlinear speedup
Sasko Ristov, Marjan Gusev, Magdalena Kostoska and Kiril Kjiroski
CPU cache is used to speed up the execution of memory-intensive algorithms. Using larger cache sizes reduces cache misses and overall execution time. This paper addresses architectures in modern processors realized as multi-chip and multi-core processors with a shared L3 cache and dedicated L2 and L1 caches. The goal is to analyze the behavior of servers and to test the hypothesis that virtual environments are usually slower than standard environments. Although most would assume that adding new software and operating system layers only slows the software down, there is also a counter-argument based on cache size exploitation: a multiprocessor with a dedicated cache per core usually requires a smaller cache size per core and can exploit the benefits of larger overall cache memory than a single processor can. This may even lead to superlinear speedup, i.e. speedup greater than the number of processing elements. The testing methodology and experiments of this research are also applied to a cloud environment. The results show that a cloud environment can likewise achieve superlinear speedup for the execution of cache-intensive algorithms when high performance computing is used in virtual machines allocated more than one processing element (core).
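For reference, the standard definitions involved (textbook material, not specific to this paper): if $T_1$ is the execution time on one processing element and $T_p$ the time on $p$ processing elements, the speedup is $S_p = T_1 / T_p$ and the efficiency is $E_p = S_p / p$; the speedup is superlinear when $S_p > p$, i.e. $E_p > 1$, which on real hardware is usually attributed to the larger aggregate cache available to the $p$ cores.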
Comparative Performance Evaluation of the AVL and Red-Black Trees
Svetlana Strbac-Savic, Milo Tomasevic
The AVL and red-black trees are suboptimal variants of binary search trees which achieve logarithmic performance of the search operation without the excessive cost of optimal balancing. After presenting a brief theoretical background, the paper comparatively evaluates the performance of these two structures. The evaluation was performed by means of simulation with a synthetic workload model. In order to obtain better insight, the performance indicators are chosen to be implementation- and platform-independent. Some representative results of the evaluation are given and discussed. Finally, the findings of this study are summarized into suggestions for the optimal use of the analyzed trees.
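As background, the well-known height bounds (textbook facts, not results of the paper): for $n$ keys, an AVL tree has height $h < 1.4405\,\log_2(n+2)$, while a red-black tree satisfies $h \le 2\,\log_2(n+1)$; both therefore guarantee $O(\log n)$ search, with AVL trees being more tightly balanced at the price of more rebalancing work on updates.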
OECEP: Enriching Complex Event Processing with Domain Knowledge from Ontologies
Sebastian Binnewies, Bela Stantic
With the increasing adoption of an event-based perspective in many organizations, the demands for automatic processing of events are becoming more sophisticated. Although complex event processing systems can process events in near real-time, these systems rely heavily upon human domain experts. This becomes an issue in application areas that are rich in specialized domain knowledge and background information, such as clinical environments. To address this issue, we utilize a framework of four techniques to enhance complex event processing with domain knowledge from ontologies. We realize this framework in our novel approach of ontology-supported complex event processing, which stands in contrast to related approaches and emphasizes the strengths of current advances in the individual fields of complex event processing and ontologies. Experimental results from the implementation of our approach based on a state-of-the-art system show its feasibility and indicate the direction for future research.
Predictive Complex Event Processing: A conceptual framework for combining Complex Event Processing and Predictive Analytics
Lajos Jeno Fülöp, Gabriella Tóth, László Vidács, Árpád Beszédes, Hunor Demeter, Lóránt Farkas
Complex Event Processing deals with the detection of complex events based on rules and patterns defined by domain experts. Many complex events require real-time detection in order to leave enough time for appropriate reactions. However, there are several events (e.g. credit card fraud) that should be prevented proactively before they occur, not merely responded to after they have happened. In this paper, we briefly describe Complex Event Processing (CEP) and Predictive Analytics (PA). Afterwards, we focus on a major future direction of CEP, namely the inclusion of PA technologies into CEP tools and applications. Involving PA opens a wide range of possibilities in several application fields. However, we have observed that only a few solutions apply PA techniques. In this paper, we define a conceptual framework which combines CEP and PA and which can be the basis of a generic design pattern in the future. The conceptual framework is demonstrated in a proof-of-concept experiment. Finally, we present the results and lessons learned.
Neuroevolution based Multi-Agent System for Micromanagement in Real-Time Strategy Games
Iuhasz Gabriel, Viorel Negru, Daniela Zaharie
The main goal of this paper is the design of a multi-agent system (MAS) that handles unit micromanagement in real-time strategy games and is able to adapt/learn during game play. To achieve this we adopted the rtNEAT approach in order to obtain customized neural network topologies, thus avoiding the generation of overly complex architectures. Furthermore, by defining internal and external inputs for each agent we create independent agents that are able to cooperate and form teams for their mutual benefit while eliminating unnecessary communication overhead. The MAS was implemented for the real-time strategy game StarCraft using the JADE multi-agent platform and BWAPI as the interface to the game. We used the built-in game AI as a baseline and also tested our system against other adaptive AI systems in order to compare their performance.
Formal Modelling of Agents acting under Artificial Emotions
Petros Kefalas, Ioanna Stamatopoulou and Dionysis Basakos
Artificial agents infused with emotions have attracted considerable attention in recent years. Many domain areas require agents to be able to demonstrate an emotional reaction to stimuli, beliefs, goals, communication etc., thus exhibiting believable behaviour. On the other hand, little has been said on how formal methods could integrate emotions in a form of rigorous mathematical notation. This paper is an initial attempt to infuse a formal modelling method for reactive agents, namely X-machines, with appropriate attributes that model artificial emotions. X-machines are finite state-based models extended with memory as well as functions that are applied on input and memory values. X-machines have been shown to be particularly useful for modelling reactive agents. The computation, that is, the overall behaviour of an agent, is a sequence of states reached through the application of transition functions (actions that an agent performs). This computation is amended when artificial emotions are involved, thus leading to a different overall behaviour when an agent acts under artificial emotions. After discussing basic theories of emotions and their role in creating believable agents, we present the definition of the eX-machine (eX) and its computation. A simple multi-agent system with emotional reactive agents is used to show the formal modelling process. Additionally, the same example is used to demonstrate how a visual simulation, based on the eX formal model, exhibits different overall system behaviour when artificial emotions are applied.
A Distributed Asynchronous and Privacy Preserving Neural Network Ensemble Selection Approach for Peer-to-peer Data Mining
Yiannis Kokkinos, Konstantinos Margaritis
This work describes a fully asynchronous and privacy-preserving ensemble selection approach for distributed data mining in peer-to-peer applications. The algorithm builds a global ensemble model over large amounts of data distributed over the peers in a network, without moving the data itself and with little centralized coordination. Only classifiers are transmitted to other peers. Here the test set of one classifier is the training set of the other and vice versa. Regularization Networks are used as ensemble member classifiers. The approach constructs a mapping of all ensemble members to a mutual affinity matrix based on the classification rates between them. After the mapping of all members, the Affinity Propagation clustering algorithm is used for the selection phase. A classical asynchronous peer-to-peer cycle is continually executed to compute the mutual affinity matrix. The cycle is composed of typical grid commands: send the local classifier to peer k, check for a received classifier m in the queue, compute local average positive hits, send the results to peer m, and send the local classifier to peer k+1. Thus the communication model used is simple point-to-point with send-receive commands to or from a single peer. The approach can also be applied to other types of classifiers.
Recognition and Normalization of Some Classes of Named Entities in Serbian
Cvetana Krstev, Jelena Jacimovic and Dusko Vitas
In this paper we present a system for recognition and normalization of measurement and money expressions and temporal expressions for dates and time in Serbian newspaper texts. The normalization of amount expressions involves a transformation of used numerals to a fixed-point notation as well as a transformation of currencies and measurement units into their standard or common abbreviations, while temporal expressions are transformed into the TimeML format. For this purpose, we use our general lexical resources and develop some new ones. The system itself consists of a large collection of finite-state transducers. Finally, we give some evaluation data that show that our system performs well, with well-balanced precision and recall.
Time Series Mining in a Psychological Domain
Vladimir Kurbalija, Hans-Dieter Burkhard, Mirjana Ivanovic, Charlotte von Bernstorff, Jens Nachtwei and Lidija Fodor
Analysis of time series has become an indispensable tool in applied settings, such as stock market analysis, process and quality control, observation of natural phenomena, medical treatments, and in the behavioral sciences, such as psychological research. In this paper, we apply a new tool set for time-series analysis (FAP, developed at the Department of Mathematics and Informatics, University of Novi Sad) to behavioral data obtained from a specific experimental lab system, a so-called Socially Augmented Microworld with three human participants (developed by informaticians and psychologists for Human Factors research at Humboldt University Berlin). On the basis of these data (log files) we extracted three types of time series and generated distance matrices using three kinds of time-series similarity measures. Finally, the clustering of the generated distance matrices produced dendrograms which serve as the basis for a deeper analysis of human behavior. The outcome of this analysis is two-fold: (a) it allows selecting the most suitable similarity measure for this domain of experimental research, and (b) the results can serve as a basis for the development of artificial agents, which may in turn replace the human participants in the experiment.
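As an illustration of the kind of similarity computation involved, here is a minimal C++ sketch of dynamic time warping and of building a pairwise distance matrix. DTW is offered only as a representative time-series measure; whether it is among the three measures actually used in the paper is an assumption.

    #include <algorithm>
    #include <cmath>
    #include <limits>
    #include <vector>

    // Classic O(n*m) dynamic time warping distance between two series.
    double dtw(const std::vector<double>& a, const std::vector<double>& b) {
        const std::size_t n = a.size(), m = b.size();
        const double INF = std::numeric_limits<double>::infinity();
        std::vector<std::vector<double>> d(n + 1, std::vector<double>(m + 1, INF));
        d[0][0] = 0.0;
        for (std::size_t i = 1; i <= n; ++i)
            for (std::size_t j = 1; j <= m; ++j) {
                double cost = std::fabs(a[i - 1] - b[j - 1]);
                d[i][j] = cost + std::min({d[i - 1][j], d[i][j - 1], d[i - 1][j - 1]});
            }
        return d[n][m];
    }

    // Pairwise distance matrix over a set of series; such a matrix is the usual
    // input for hierarchical clustering that produces dendrograms.
    std::vector<std::vector<double>> distanceMatrix(const std::vector<std::vector<double>>& series) {
        std::vector<std::vector<double>> dist(series.size(), std::vector<double>(series.size(), 0.0));
        for (std::size_t i = 0; i < series.size(); ++i)
            for (std::size_t j = i + 1; j < series.size(); ++j)
                dist[i][j] = dist[j][i] = dtw(series[i], series[j]);
        return dist;
    }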
An Approach to Automated Reparation of Failed Proof Attempts in Propositional Linear Logic Sequent Calculus
Tatjana Lutovac
In many application areas of automated reasoning it is not sufficient to show that a given assertion is true. A good reasoning system should include tools not only for the generation of proofs, but also for the analysis of, and manipulation of, unsuccessful proof attempts. This paper focuses on the development of automated techniques for the transformation of a finite, failed sequent proof attempt D (in one-sided propositional linear logic) into a complete proof which retains the order of sequent rules of D and whose conclusion contains the conclusion of D. An algorithm is developed which replaces each non-axiom leaf ⊢ Δ with a proof of ⊢ Δ, F, where F is a formula dependent on Δ.
A Knowledge Based Approach for Handling Supply Chain Risk Management
Adrian Solomon, Panayiotis Ketikidis and Alok Choudhary
This paper discusses the concept of supply chain risk management (SCRM) in relation to the emerging challenges brought by globalisation and information and communication technologies (ICT), and the ability of SCRM frameworks to adapt to these latest requirements. As SCRM can be responsible for a loss or gain of profit, the ultimate goal of enterprises is to have resilient supply chains with automated decision making that can deal with potential disruptions. In response, taking advantage of ICT developments such as knowledge and data discovery techniques and automated risk management frameworks has become vital for assuring business success. In this context, this research has the following aims: 1) to perform a literature review identifying and categorising several types of supply chain risks in order to analyze their management strategies, 2) to perform a literature review on knowledge management frameworks, and 3) to propose a knowledge management and a risk management framework that will, at a further stage of this research, be integrated into an agent-based decision support system for supply chain risk management.
Social Networking Software use in Social Media Marketing vs. Traditional Marketing: Case Study in Macedonia
Bekim Fetaji, Amir Demiri
The focus of this research is to analyze the possibility of applying social networking software for social media marketing, to compare it with traditional marketing, and to analyze a case study of marketing approaches for small businesses in Macedonia. The contribution of the research study is the proposed approach to using social networking software for social media marketing. The study also outlines the research strategy, sampling methods, questionnaire and data collection method. Lastly, it presents the validity and reliability measures used in this study. Findings are stated and recommendations are provided.
A Software Tool for Building a Statistical Prefix Processor
Nikitas Karanikolas, Michael Vassilakopoulos and Nektarios Giannoulis
Information Retrieval and Text Classification need to match words between the user's input and the documents in a collection of texts. Matching of words is not a trivial process, since words have grammatical (inflectional and derivational) variations. There are two main approaches for matching between inflected words: stemming (removing word suffixes based on ad-hoc selected suffixes) and lemmatizing (replacing the inflected form with the base form of a word). However, these approaches normalize word variations only at the rightmost side of the word. We claim it will be beneficial to additionally concentrate on word normalization at the left side, by removing word prefixes. In this report, we present the architecture and functioning of a software tool that can be used as the first stage of a Statistical Prefix Processor, a system that could effectively remove prefixes from words and act as a preprocessing stage of text analysis applications. The tool we present comprises two stages / subtools. During the first stage, possible prefixes of words within a collection of texts are identified. During the second stage, a number of users (native speakers) process the text collection, automatically locate words that contain each stem, and characterize the prefixes used with each stemmed word. After the text collection has been processed by all users, statistical conclusions can be drawn for each stemmed word and its associated prefixes.
Management, Communications and Security Policy in Mobile Database Systems
Vigan Raça, Betim Cico and Majlinda Fetaji
This paper presents a typical “Net-Centric” system based on three main scenarios concerning the management of data, communications and security policies, all in the context of a mobile system. The paper relates these elements in a mobile application that is well suited to the commercial market, offering a solution which, in addition to facilitating the work, also reduces two valuable components: cost and time. The mobile devices on which the application is installed may be PDAs or smartphones, which synchronize with the server and initiate a number of different client-to-server transactions, using different protocols and communication standards that are presented and implemented in the mobile application. Furthermore, the paper identifies the key elements that should be met to establish the service in the market, and also examines the relations between the business and these elements. The three scenarios mentioned above are researched and studied, and we observe how the elements can change depending on how the service is performed.
An Approach to the Specification of User Interface Templates for Business Applications
Sonja Ristic, Ivan Lukovic, Slavica Aleksic, Jelena Banovic and Ali Al-Dahoud
Through a number of research projects we propose a form-driven approach to business application generation. Our IIS*Studio development environment (IIS*Studio DE, current version 7.1) is aimed at supporting the form-driven approach and provides information system (IS) design and the generation of executable business application prototypes. An executable business application specification, generated by means of IIS*Studio, may be visually interpreted in different ways. In the paper we present the extension of the IIS*Studio repository containing the common model of the user interface (UI). The IIS*UIModeler is an integrated part of the IIS*Studio development environment, aimed at modelling UI templates. Applying it, a designer specifies UI templates. A UI template specification contains attribute values that describe common UI characteristics, such as screen size, main application window position, background/foreground colour, etc. UI template specifications are independent of any specific IS project specification generated by means of the IIS*Studio tool. The same UI template may be used for the generation of business application prototypes of different ISs. Also, the same IS project specification may be visually interpreted in different ways by means of different UI templates. The specification of a UI template may be seen as a fully platform-independent UI model. Besides the detailed description of the UI template common model, we illustrate the main features of the IIS*UIModeler tool.
Analyzing the Selection and Dynamic Composition of Web Services in E-commerce Transactions
Nikolaos Vesyropoulos, Christos K. Georgiadis and Christos Ilioudis
Over the past few years Web Services (WS) have revolutionized the way loosely coupled distributed systems communicate and interact online. The aforementioned success has led to an abundance of available WS, which makes it harder for users and businesses to discover the appropriate services to be used as standalones or as part of a domain-specific service composition. Semantics and Ontologies may certainly provide invaluable solutions to facilitate the discovery process. In addition, Quality of Service (QoS) characteristics may also be taken into consideration towards optimizing service compositions. In this paper we firstly attempt to stress the importance of properly discovering and selecting WS by reviewing recent research results and secondly to analyze and identify the current discrete dynamic service composition approaches. Our interest is both for QoS-aware service compositions (system level), and for Business-driven automated compositions (business level). We highlight the advantages, the methods and techniques involved and the challenges of each approach. Finally, we analyze their influence on designing and implementing interoperable e-commerce transactions as solutions that exploit dynamic composition scenarios.
International Educational Cooperation – One Possible Model
Klaus Bothe, Zoran Putnik and Betim Cico
In this paper, an example of successful cooperation in the field of education is presented. As part of an international educational project comprising nine countries and fifteen universities, a "crash course" on "Software Engineering" has been conducted at the Polytechnic University of Tirana by a professor from Germany and an assistant from Serbia, with the unselfish help of a local professor from Albania. After the fifth run of the course, we present our experiences, share the knowledge gained, depict the difficulties we encountered, and describe the satisfaction we gained.
Modeling the Characteristics of a Learning Object for Use within e-Learning Applications
George Nikolopoulos, Georgia Solomou, Christos Pierrakeas and Achilles Kameas
Educational content plays a significant role in the process of delivering knowledge; that is why it needs to be designed carefully, following designated principles. Learning Objects (LOs) constitute a novel approach to the organization of educational content, bearing features that, if effectively used, could lead to enhanced e-learning services. What is missing from the literature, though, is common agreement about an LO's attributes and structure. For this reason, we initially try to specify the main characteristics of an LO and determine its functionality, especially in the context of distance education. Having realized its fundamental role in the instructional design process, we make explicit its correlation with educational objectives and other aspects of learning. Finally, in an attempt to capture all of an LO's characteristics and make them utilizable by e-learning applications, we propose a metadata schema reflecting all features of an LO, as described in this work.
Programming Techniques and Environments in a Technology Management Department
Stelios Xinogalos
Teaching and learning programming is widely known to be quite problematic. Designing and deploying programming courses is also quite complex. Several choices have to be made, such as selecting the first programming technique and language, the sequence of programming techniques presented to students, and the programming environments and teaching approaches utilized. In this paper, we present the rationale behind the sequence of programming techniques and languages taught at a Technology Management Department, as well as the decisions made for a smoother transition from the imperative to the object-oriented programming technique in terms of the environments and teaching approaches used. Furthermore, students' replies to a questionnaire regarding their difficulties with this sequence of programming techniques and with learning programming in general are analyzed.
Integrating Serbian Public Data into the LOD Cloud
Valentina Janev, Uroš Miloševic, Mirko Spasic, Sanja Vraneš, Jelena Milojkovic and Branko Jirecek
Linked Open Data (LOD) is a growing movement for organizations to make their existing data available in a machine-readable format. There are two equally important viewpoints on LOD: publishing and consuming. This article analyzes the requirements for both sub-processes and presents an example of publishing statistical data in RDF format and integrating the data into the LOD cloud via the PublicData.eu portal. In particular, it discusses the establishment of the Serbian CKAN metadata repository that serves for publishing open government data from Serbia, as well as a source catalogue for the PublicData.eu portal. Furthermore, using an illustrative case study of the Statistical Office of the Republic of Serbia, it elaborates on the adaptation of the LOD2 Stack for the analysis and dissemination of official statistics information.
Growth rate analysis of e-Government development
Kiril Kiroski, Marjan Gushev, Magdalena Kostoska and Sasko Ristov
This paper addresses the growth rate of the e-Government sophistication level by comparing the corresponding e-Government services benchmarks carried out in European countries. We have defined a methodology based on new indicators that capture the growth rate of e-Government development. This analysis was motivated by the extensive amount of information available about the sophistication level of e-Government services in individual countries and is used to measure, analyze and compare growth. We introduced two indicators: annual growth and growth period. We have also defined clusters describing the typical behavior of each indicator and show how the obtained results define a categorization.
Risks affecting the development of the information society in the Republic of Moldova: insights from a Delphi survey
Horatiu Dragomirescu, Ion Tighineanu
In the digital age, the development of information societies, far from being a linear process, is exposed to risks that are considerable in countries with emerging economies, including South-Eastern European ones. Foresight studies are recognised as useful in generating the kind of actionable knowledge required for exerting multi-stakeholder governance of such societal processes in a precautionary way. This paper presents the outcomes of the first round of a Delphi survey undertaken in 2011 on the risks affecting the development of the information society in the Republic of Moldova; a customised methodology was adopted in designing the questionnaire, most items being peculiar to the decision-type Delphi. The survey outcomes indicated that the Republic of Moldova's information society has actually reached a mid-range stage between "disarticulated" and "world leader", on a scale based upon the International Telecommunications Union's "8 Cs framework". The development of the country's information society was rated as a top priority of the current public agenda, one that also holds for the medium- and long-term future and is steadily supported by the state, although mainly in declarative terms. The most severe risk identified is that of the country's research system and higher education system continuing to function in a bare survival regime, under-financed and loosely coupled; in turn, the main vulnerability deemed to affect the Republic of Moldova's participation in the European Union's 7th Framework Programme for Research and Technological Development (2007-2013) consists of the insufficient attractiveness of the income and academic career prospects domestically available to young professionals.
A Novel Algorithm for an Image Processing System in Entomology
Marjan Kindalov, Ana Madevska Bogdanova and Zarko Erakovic
This paper presents a novel algorithm for the localization of characteristic symmetrical parts of an image. The algorithm was developed in order to recognize pupae images of the insects Bemisia tabaci and Trialeurodes vaporariorum, but its generic nature enables its use in different domains. This novel Symmetrical self-filtration algorithm (SSF) is based on the template matching algorithm and utilizes the symmetrical nature of the images. Its purpose is to enhance the outcome of the template matching process.
Privacy Aware eLearning environments based on Hippocratic database principles
Jasmin Azemovic
Ensuring privacy in modern information systems is of primary importance for the users of these environments. Users' adoption of and trust in such systems certainly depend on the degree of privacy. A solution to the above-mentioned problems can be found in the application of the Hippocratic Databases (HDB) concept. The idea is inspired by the basic principles of the Hippocratic Oath, applied to databases in order to provide data privacy and confidentiality. The implementation and advantages of this concept have been researched for the needs of business intelligence systems and health information systems, but, until now, not for eLearning systems. We have created a prototype model of an e-learning environment that fully implements HDB principles. In order to prove the usability and viability of the model, we compared the performance of the production eLearning system with the prototype model. The results of these studies are presented in this paper.
Correlation between Soft Organizational Features and Development of ICT Infrastructure
Mladen Cudanov, Ivan Todorovic and Ondrej Jaško
This paper aims to investigate the interrelationship between management style, as a soft organizational trait, and the development of ICT infrastructure in a company as a prerequisite for corporate ICT adoption. Among the large set of factors influencing the rate of corporate ICT infrastructure development and adoption, factors of an organizational nature are neglected in the literature and in practical focus. In particular, soft organizational factors like managerial styles, skills, shared values or staff traits receive less attention than hard factors like organizational structure, processes or strategy, although existing research shows a positive correlation between ICT adoption and soft organizational factors. The empirical background for our research comes from a case analysis of 78 enterprises in the Balkan countries. The theoretical background is gained from previous research on the connections between managerial styles, dominant management orientation and other organizational traits and the adoption of ICT. Quantitative data extracted from that analysis are calculated through the Composite Indicator of ICT Infrastructure, an advanced version of the Composite Index of ICT Adoption.
Challenging Issues of UCON in Modern Computing Environments
Christos Grompanopoulos, Ioannis Mavridis
Usage CONtrol (UCON) is a next-generation access control model enhanced with capabilities present in trust and digital rights management. However, modern computing environments usually introduce complex usage scenarios. Such complexity results in involving a large number of entities and in utilizing multi-party contextual information during the decision-making process for a particular usage. Moreover, usage control is required to support novel access modes on single or composite resources, while taking into account new socio-technical abstractions and relations. In this paper, a number of challenging issues faced when UCON is applied in modern computing environments are highlighted through the utilization of representative usage scenarios. The results of this study reveal various limitations in contextual information handling, a lack of support for complicated usage modes of subjects on objects, and weaknesses in utilizing information concerning previous or current usages of system resources.
Comparison of Information Retrieval Models for Question Answering
Jasmina Armenska, Katerina Zdravkova
Question Answering Systems (QAS) are an important research topic triggered and at the same time stimulated by the immense amount of text available in digital form. As the quantity of natural language information increases, new methods to precisely retrieve the exact information from massive textual databases become indispensable. Although QAS have already been well explored, there are still many aspects to be solved, particularly those which are language-specific. The main goal of the research presented in this paper was to compare three proven information retrieval (IR) models in order to accurately determine the relevant documents which contain the correct answer to questions posed in the Macedonian language. This was accomplished using a real-life corpus of lectures and related questions existing in our e-testing system. In order to compare the results, we designed a small system capable of learning the correct answer. We found that the modified vector space model is the most suitable for our collection. The results we obtained are promising and encourage further improvements by adopting some of the existing IR models, or even proposing a new one.
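For reference, the classical vector space model ranking that the modified model builds on (standard formulation, not the paper's modification): documents and queries are represented as term-weight vectors, typically tf-idf, and ranked by cosine similarity $\mathrm{sim}(q,d) = \frac{\sum_i w_{i,q}\, w_{i,d}}{\sqrt{\sum_i w_{i,q}^2}\,\sqrt{\sum_i w_{i,d}^2}}$, where $w_{i,q}$ and $w_{i,d}$ are the weights of term $i$ in the query and the document.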
Efficient dataset size reduction by finding homogeneous clusters
Stefanos Ougiaroglou, Georgios Evangelidis
Although the k-Nearest Neighbor classifier is one of the most widely used classification methods, it suffers from the high computational cost and storage requirements it involves. These major drawbacks have motivated an active research field over the last decades. This paper proposes an effective data reduction algorithm that has a low preprocessing cost and reduces storage requirements while maintaining classification accuracy at an acceptably high level. The proposed algorithm is based on a fast pre-processing clustering procedure that creates homogeneous clusters. The centroids of these clusters constitute the reduced training set. Experimental results, based on real-life datasets, illustrate that the proposed algorithm is faster and achieves higher reduction rates than three known existing methods, while it does not significantly reduce the classification accuracy.
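A minimal C++ sketch of the general idea: keep one centroid per homogeneous (single-class) cluster and split mixed clusters around per-class mean seeds. This is an illustrative reconstruction under stated assumptions, not the authors' exact algorithm.

    #include <cstddef>
    #include <map>
    #include <queue>
    #include <vector>

    struct Instance  { std::vector<double> x; int label; };
    struct Prototype { std::vector<double> x; int label; };

    // Mean vector of a non-empty group of instances of equal dimensionality.
    static std::vector<double> meanOf(const std::vector<Instance>& group) {
        std::vector<double> m(group.front().x.size(), 0.0);
        for (const auto& ins : group)
            for (std::size_t d = 0; d < m.size(); ++d) m[d] += ins.x[d];
        for (double& v : m) v /= static_cast<double>(group.size());
        return m;
    }

    static double sqDist(const std::vector<double>& a, const std::vector<double>& b) {
        double s = 0.0;
        for (std::size_t d = 0; d < a.size(); ++d) s += (a[d] - b[d]) * (a[d] - b[d]);
        return s;
    }

    // Keep one centroid per homogeneous cluster; split non-homogeneous clusters with a
    // single assignment pass around per-class mean seeds and re-examine the parts.
    std::vector<Prototype> reduceByHomogeneousClusters(const std::vector<Instance>& train) {
        std::vector<Prototype> reduced;
        std::queue<std::vector<Instance>> pending;
        pending.push(train);
        while (!pending.empty()) {
            std::vector<Instance> cluster = std::move(pending.front());
            pending.pop();
            std::map<int, std::vector<Instance>> byClass;
            for (const auto& ins : cluster) byClass[ins.label].push_back(ins);
            if (byClass.size() == 1) {                      // homogeneous: keep its centroid
                reduced.push_back({meanOf(cluster), cluster.front().label});
                continue;
            }
            std::vector<std::vector<double>> seeds;         // one seed per class present
            for (const auto& kv : byClass) seeds.push_back(meanOf(kv.second));
            std::vector<std::vector<Instance>> parts(seeds.size());
            for (const auto& ins : cluster) {
                std::size_t best = 0;
                for (std::size_t s = 1; s < seeds.size(); ++s)
                    if (sqDist(ins.x, seeds[s]) < sqDist(ins.x, seeds[best])) best = s;
                parts[best].push_back(ins);
            }
            std::size_t nonEmpty = 0;
            for (const auto& p : parts) if (!p.empty()) ++nonEmpty;
            if (nonEmpty <= 1) {                            // no progress: split by class to guarantee termination
                for (auto& kv : byClass) pending.push(std::move(kv.second));
                continue;
            }
            for (auto& p : parts) if (!p.empty()) pending.push(std::move(p));
        }
        return reduced;
    }

The reduced set of prototypes can then be used directly as the training set of a 1-NN or k-NN classifier in place of the full data.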
Discovery and Evaluation of Students' Profiles with Machine Learning
Evis Trandafili, Alban Allkoçi, Elinda Kajo and Aleksandër Xhuvani
Higher education institutions are overwhelmed with huge amounts of information regarding students' enrollment, the number of courses completed, achievement in each course, performance indicators and other data. This has led to an increasingly complex analysis process over the growing volume of data and to an inability to make decisions regarding curricula reform and restructuring. On the other hand, educational data mining is a growing field aiming at discovering knowledge from students' data in order to thoroughly understand the learning process and take appropriate actions to improve students' performance and the quality of course delivery. This paper presents a thorough analysis process performed on students' data using machine learning techniques. Experiments performed on a very large real-world dataset of student performance across all courses of a university reveal interesting and important student profiles through clustering, and surprising relationships among course performance through association rule mining.
Parameterized Verification of Open Procedural Programs
Aleksandar Dimovski
This paper describes a concrete implementation of a game-semantics based approach for the verification of open program terms parameterized by a data type. The programs are restricted to be data-independent with respect to the data type treated as a parameter, which means that the only operation allowed on values of that type is equality testing. The programs can also input, output, and assign such values. This provides a method for verifying a range of safety properties of programs which contain data-independent infinite types. In order to enable verification of programs with arbitrary infinite (integer) types, the proposed method can be extended by combining it with an abstraction refinement procedure. We have developed a tool which implements this method as well as its extension, and we demonstrate its practicality on several academic examples.
Performance Study of Matrix Computations using Multi-core Programming Tools
Panagiotis Michailidis, Konstantinos Margaritis
Basic matrix computations such as vector and matrix addition, dot product, outer product, matrix transpose, matrix-vector and matrix multiplication are very challenging computational kernels arising in scientific computing. In this paper, we parallelize these basic matrix computations using multi-core and parallel programming tools. Specifically, these tools are Pthreads, OpenMP, Intel Cilk++, Intel TBB, Intel ArBB, SMPSs, SWARM and FastFlow. The purpose of this paper is to present a unified quantitative and qualitative study of these tools for parallel matrix computations on multi-core systems. Based on the performance results with compilation optimization, we conclude that the Intel ArBB and SWARM parallel programming tools are the most appropriate because they combine good performance with simplicity of programming. In particular, we conclude that Intel ArBB is a good choice for implementing intensive computations such as the matrix product, because it gives significant speedup over the serial implementation. On the other hand, the SWARM tool gives good performance for matrix operations of medium size such as vector addition, matrix addition, outer product and matrix-vector product.
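As a flavour of the kernels being compared, a minimal OpenMP matrix-vector product in C++ (an illustrative sketch only, not the paper's benchmark code): the outer loop rows are independent, which is what makes this class of kernels easy to parallelize with the listed tools.

    #include <cstddef>
    #include <vector>
    #ifdef _OPENMP
    #include <omp.h>
    #endif

    // y = A * x, with A stored row-major as an n x m matrix.
    // Each output row is independent, so the outer loop parallelizes trivially.
    std::vector<double> matVec(const std::vector<double>& A,
                               const std::vector<double>& x,
                               std::size_t n, std::size_t m) {
        std::vector<double> y(n, 0.0);
        #pragma omp parallel for
        for (long long i = 0; i < static_cast<long long>(n); ++i) {
            double sum = 0.0;
            for (std::size_t j = 0; j < m; ++j)
                sum += A[i * m + j] * x[j];
            y[i] = sum;
        }
        return y;
    }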
Recent advances delivered by HTML 5 in mobile Cloud Computing applications: a survey
Stelios Xinogalos, Kostas Psannis and Angelo Sifaleras
With the explosive growth of mobile applications and the emergence of the Cloud Computing (CC) concept, Mobile Cloud Computing (MCC) has been introduced as a potential technology for mobile services. MCC refers to an infrastructure where both data storage and data processing happen outside of the mobile device. One of the technologies that will advance MCC is the latest version of the Web's markup language, HTML 5. In this paper, we present a survey of new HTML 5 features with a focus on how they address current MCC limitations. Specifically, we present the most important features of HTML 5 organized into different categories and their contribution to the deployment of MCC applications. Finally, the results of research carried out on evaluating HTML 5 in terms of a wide range of applications and specifications are reviewed.
A Parallel Processing of Spatial Data Interpolation on Computing Cloud
Vladimír Siládi, Ladislav Huraj, Eduard Vesel and Norbert Polcák
In a short span of time, cloud computing has grown, particularly for commercial web applications. But cloud computing also has the potential to become a powerful instrument for scientific computing. A pay-as-you-go model with minimal or no upfront costs creates a flexible and cost-effective means to access compute resources. In this paper, we carry out a study of the performance of the spatial interpolation of snow cover depth on the most widely used cloud infrastructure (Amazon Elastic Compute Cloud). The main characteristic of the interpolation computation is that it is time-consuming and data-intensive; therefore, utilizing a parallel programming paradigm is appropriate. The geoprocessing is realized on two configurations provided by Amazon EC2, and the results as well as the performance of the computation are presented in the article.
A Hoare-Style Verification Calculus for Control State ASMs
Werner Gabrisch, Wolf Zimmerman
We present a Hoare-style calculus for control-state Abstract State Machines (ASMs) that makes verification of control-state ASMs possible. In particular, a Hoare triple {φ}A{ψ} for an ASM A means that if an initial state I satisfies the precondition φ and a final state F is reached by A, then the final state satisfies the postcondition ψ. While it is straightforward to generalize the assignment axiom of the Hoare calculus to a single state transition, the composition of Hoare triples is challenging, since typical programming language concepts are not present in ASMs.
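For reference, the classical assignment axiom being generalized reads $\{\psi[E/x]\}\; x := E\; \{\psi\}$: to establish $\psi$ after the assignment, it suffices that $\psi$ with $E$ substituted for $x$ holds before it. The paper lifts this idea to single ASM state transitions and addresses the more challenging composition of the resulting triples.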
Information System Monitoring and Notifications using Complex Event Processing
Filip Nguyen, Tomáš Pitner
Complex Event Processing (CEP) is a novel approach to processing streaming events and extracting information that would otherwise be lost. While tools for CEP are available today, they are usually used in only a limited number of projects. That is disappointing, because every Enterprise Information System (EIS) produces a high number of events, e.g. by logging debug information, and industry is not taking advantage of CEP to make this information useful. We pick two concepts that seem to come from different categories: notifications, a ubiquitous way of notifying users of an EIS, and EIS monitoring. For notifications, we define a new abstraction with respect to separation of concerns, to create a more maintainable implementation. In our research we show that this is a typical example of a possible future application of CEP and that industry requires specific service-oriented tools that can be used for both notifications and monitoring. Introducing such service-oriented tools into industry would promote EIS maintainability and extensibility.
Statically Typed Matrix
Predrag S. Rakic, Lazar Stricevic and Zorica Suvajdzin Rakic
Contemporary C++ matrix libraries model matrices as if the only relevant characteristics of a matrix type were its element type and number of dimensions. The actual size of each dimension is usually completely disregarded in the model. Dimension size is treated as a dynamic characteristic of the matrix object, making the matrix type neither static nor dynamic, but something in between. A logical consequence of this data model inconsistency is a more or less noticeable discrepancy in the interface design. A matrix model in which element type, number of dimensions and the size of each dimension are all treated as equally important characteristics of the matrix type is presented in this paper. The proposed matrix model is implemented in the C++ proof-of-concept template library called Typed Matrix Library (TML). Matrices in TML are statically typed objects. Modeling matrices this way enables compile-time correctness verification of matrix operations. At the same time, this approach incurs no run-time overhead compared to the classical one. Arguably, linear algebra programs based on the presented model require no additional information or dependencies to be supplied in the program code beyond what developers are already aware of; thus, no extra developer effort is required in order to use matrices based on this model.
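A minimal sketch of the underlying idea, encoding the dimension sizes in the type so that mismatched operations are rejected at compile time (a hypothetical illustration, not the actual TML interface):

    #include <array>
    #include <cstddef>

    // Element type and both dimension sizes are part of the matrix type.
    template <typename T, std::size_t Rows, std::size_t Cols>
    struct Matrix {
        std::array<T, Rows * Cols> data{};
        T&       operator()(std::size_t r, std::size_t c)       { return data[r * Cols + c]; }
        const T& operator()(std::size_t r, std::size_t c) const { return data[r * Cols + c]; }
    };

    // Multiplication is only defined when the inner dimensions agree, so a dimension
    // mismatch is rejected by the compiler rather than detected at run time.
    template <typename T, std::size_t N, std::size_t K, std::size_t M>
    Matrix<T, N, M> operator*(const Matrix<T, N, K>& a, const Matrix<T, K, M>& b) {
        Matrix<T, N, M> r{};
        for (std::size_t i = 0; i < N; ++i)
            for (std::size_t j = 0; j < M; ++j) {
                T s{};
                for (std::size_t k = 0; k < K; ++k) s += a(i, k) * b(k, j);
                r(i, j) = s;
            }
        return r;
    }

    // Usage: Matrix<double, 3, 4> A; Matrix<double, 4, 2> B; auto C = A * B;  // OK: 3x2 result
    //        Matrix<double, 2, 2> D; auto E = A * D;                          // compile-time error

Because the sizes are compile-time constants, storage can live on the stack and loops can be fully unrolled by the optimizer, which is why such a design need not add run-time overhead.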
Formal Modelling of a Bio-Inspired Paradigm Capable of Exhibiting Emergence
Konstantinos Rousis, George Eleftherakis, Ognen Paunovski and Tony Cowling
The Emergent Distributed Bio-Organization (EDBO) case study has demonstrated the potential of harnessing emergent properties in Artificial Distributed Systems (ADS). Introducing biologically inspired attributes and functions at the microscopic level (i.e. the biobots) has allowed for the emergence of global-level behaviours such as network scalability, availability and super-node formations. The experience gained during work with EDBO was further incorporated into a disciplined framework for harnessing emergence in ADS. In an attempt to increase confidence in this framework and the results gathered so far by EDBO simulations, this paper performs a feasibility study on formally modelling, documenting, and validating the EDBO case study. By using the X-machine formalism, this work further serves as a preliminary transition step towards running EDBO on FLAME, an agent-based simulation platform built upon the theoretical foundation of X-machines.
Community Detection and Analysis of Community Evolution in Apache Ant Class Collaboration Networks
Miloš Savic, Miloš Radovanovic and Mirjana Ivanovic
In this paper we investigate community detection algorithms applied to class collaboration networks (CCNs) that represent class dependencies of 21 consecutive versions of the Apache Ant software system. Four community detection techniques, Girvan-Newman (GN), Greedy Modularity Optimization (GMO), Walktrap and Label Propagation (LP), are used to compute community partitions. The obtained community structures are evaluated using community quality metrics (inter- and intra-cluster density, conductance and expansion) and compared to the package structures of the analyzed software. In order to investigate the evolutionary stability of the community detection methods, we designed an algorithm for tracking evolving communities. For LP and GMO, the algorithms that produce partitions with higher values of the normalized modularity score compared to GN and Walktrap, we noticed an evolutionary degeneracy: LP and GMO are extremely sensitive to small evolutionary changes in CCN structure. Walktrap shows the best performance considering community quality, evolutionary stability and agreement with the actual grouping of classes into packages. Coarse-grained descriptions (CGD) of CCNs are constructed from Walktrap partitions and analyzed. The results suggest that CCNs have a modular structure that cannot be considered hierarchical, due to the existence of large strongly connected components in CGDs.
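For reference, the standard Newman-Girvan modularity underlying these methods is $Q = \frac{1}{2m}\sum_{i,j}\big(A_{ij} - \frac{k_i k_j}{2m}\big)\,\delta(c_i, c_j)$, where $A$ is the adjacency matrix, $k_i$ the degree of node $i$, $m$ the number of edges, and $\delta(c_i,c_j)=1$ when nodes $i$ and $j$ are assigned to the same community; the normalized modularity score mentioned above is a normalized variant of this quantity.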
Optimising Flash Non-Volatile Memory using Machine Learning: A Project Overview
Tom Arbuckle, Damien Hogan and Conor Ryan
While Flash memory is nearly ubiquitous, its physical principles mean that its performance degrades with use. During fabrication and operation, its ability to be repeatedly programmed/erased (endurance) needs to be balanced against its ability to store information over months/years (retention). This project overview describes how our modelling of data obtained experimentally from Flash chips uniquely allows us to optimise the settings of their internal configuration registers, thereby mitigating these problems.
A Set-Based Approach to Negotiation with Concessions
Costin Badica, Amelia Badica
Concessions made to opponents are a well-known mechanism for improving one's own negotiation position towards reaching an agreement in bilateral and multilateral negotiation. Probably the best-known negotiation protocol that employs concessions is the Monotonic Concession Protocol (MCP). In this paper we propose a generalization of the bilateral MCP negotiation protocol by conceptualizing agent preferences and offers using sets of deals.
The modification of genetic algorithms for solving the balanced location problem
Vladimir Filipovic, Jozef Kratica, Aleksandar Savic and Ðorde Dugošija
This paper describes a modification of an existing evolutionary approach for the Discrete Ordered Median Problem (DOMP) in order to solve the Balanced Location Problem (LOBA). The described approach, named HGA1, is a hybrid of a Genetic Algorithm (GA) and the well-known Fast Interchange Heuristic (FIH). HGA1 uses a binary encoding scheme. New genetic operators that preserve the feasibility of individuals are also proposed. In the proposed method, a GA caching technique is integrated with the FIH heuristic to improve computational performance. The algorithm is tested on standard instances from the literature and on large-scale instances, with up to 1000 potential facilities and clients, generated by the generator described in [5]. The obtained results are also compared with an existing heuristic from the literature.
Reservoir Sampling Techniques in Modern Data Analysis
Anže Pecar, Miha Zidar and Matjaz Kukar
Reservoir sampling is an interesting statistical sampling technique, developed almost 40 years ago in order to enable analysis of large-scale data (for that time) while utilizing limited computer memory resources. We present an overview of frequently used reservoir sampling techniques and discuss how they can be used for learning from data streams. While they are not perfect for all scenarios, they can easily be modified for many purposes, and also find a place in surprisingly useful modern data analysis approaches.
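As a concrete example of one of the frequently used techniques, the classic Algorithm R in C++: the first k stream items fill the reservoir, and each later i-th item replaces a random slot with probability k/i, so every item ends up in the sample with probability k/n. This is an illustrative sketch of the baseline technique, not code taken from the paper.

    #include <cstddef>
    #include <random>
    #include <vector>

    // Algorithm R: maintains a uniform random sample of size k over a stream
    // of unknown length, using O(k) memory.
    template <typename T>
    class ReservoirSampler {
    public:
        explicit ReservoirSampler(std::size_t k) : k_(k), seen_(0), rng_(std::random_device{}()) {}

        void offer(const T& item) {
            ++seen_;
            if (reservoir_.size() < k_) {
                reservoir_.push_back(item);            // fill phase: keep the first k items
            } else {
                // Replace a uniformly chosen slot with probability k / seen_.
                std::uniform_int_distribution<std::size_t> pick(0, seen_ - 1);
                std::size_t j = pick(rng_);
                if (j < k_) reservoir_[j] = item;
            }
        }

        const std::vector<T>& sample() const { return reservoir_; }

    private:
        std::size_t k_, seen_;
        std::mt19937 rng_;
        std::vector<T> reservoir_;
    };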
A Software Tool that Helps Teachers in Handling, Processing and Understanding the Results of Massive Exams
Marko Mišic, Marko Lazic and Jelica Protic
During the last decade, at the University of Belgrade, School of Electrical Engineering, various tools have been developed and used for the automation of preparation, grading and results processing of programming exams. Those exams consist of multiple-choice questions that represent sophisticated programming puzzles, and coding problems that require students to write solutions to a given problem. In order to gain a better understanding of students' achievements, there was a need for statistical analysis of the exam results. We have developed a new tool in the aforementioned tool chain that processes exam results and presents various statistical parameters in tables and graphs. The purpose of the project is to help teachers in handling massive exams with more than 500 students, and to provide them with information on students' knowledge in various parts of the programming course, specific algorithms, etc.
Framework for open data mining in e-government
Petar Milic, Nataša Veljkovic and Leonid Stoimenov
Data mining in e-government is the process of translating data from government web sites into useful knowledge that can provide various types of support in decision making. Data mining can be applied to any type of data, but we have chosen to use this technique on open government data. Open data is a new concept in the development of e-government. It stands for public sector information which is available for distribution and usage without any restrictions. In this paper we give an overview of a framework for open data mining and present an example of its usage for data mining on government open data portals.
Extracting drug adverse and beneficial reactions in pediatric population from healthcare social network
Jelena Hadzi-Puric, Jeca Grmusa
Popular Web forums offer parents and doctors an opportunity to discuss and share healthcare information about symptoms of diseases, diagnosis and treatment in the pediatric population, including side effects. The objectives of this paper are to explore the extraction of drug reactions from a healthcare social network, to review the differences between the language of the biomedical literature and patient vocabulary, and to introduce an appropriate database model for representing qualitative features of approved and withdrawn drugs. We have also designed an extensible database of comments that is regularly updated with new drug classes and patient messages. The application is particularly useful in those countries where pediatric drugs are available to patients without a prescription.
S-Suite: A Multipart Service Oriented Architecture for The Car Rental Sector
Margarita Karkali, Michalis Vazirgiannis
The car rental business is one with enormous budgets due to its popularity in tourism and business trips worldwide. The broker-service provider model is the dominant one, with brokers searching and negotiating with several providers for each reservation request. This implies a workload that could overwhelm the participating parties. Moreover, the reservation life cycle in the aforementioned model is a complex process bearing exhaustive details and constraints that have to be met until a reservation is confirmed and deployed. In this paper we propose S-Suite, a fully implemented and operational SOA architecture that mediates between brokers and service providers. It handles the full life cycle of reservations, enabling automatic reservation handling and incorporating the most advanced functional features demanded by brokers and service providers. The benefits of the system are multiple: a. efficiency and transparency, b. optimal matching between reservation demands and service offers at a local level.
Storing XML Documents in Databases Using Existing Object-Relational Features
Dusan Petkovic
One of the main research areas concerning XML documents is how to store them. Researchers usually suggest the use of relational databases for this task. However, object-relational databases, with their extended data model, are better suited for this purpose, because the “flat” relational model is not ideal for mapping the hierarchical structure of XML. Several papers have been published that describe how the structures and constraints of XML documents can be mapped to object-relational databases (ORDBs). However, their results cannot be applied in practice, because the available technology usually does not support the object-oriented concepts that are used. In this paper we propose mapping rules that are practicable for existing ORDBMSs in general. Specifically, we analyze the object-oriented concepts implemented in an existing ORDBMS (DB2) and show how XML Schema parts can be mapped to them. We also perform a case study to illustrate mappings according to our rules. Although the paper discusses the use of XML Schema instances as metadata, existing DTD instances can also be used, because they can easily be transformed into corresponding XML Schema documents. To our knowledge this is the first such proposition for the database system mentioned above.
Insider Threats in Corporate Environments: A case study for Data Leakage Prevention
Veroniki Stamati-Koromina, Christos Ilioudis, Richard Overill, Christos Georgiadis and Demosthenes Stamatis
Regardless of the established security controls that organizations have put in place to protect their digital assets, a rise in insider threats has been observed, particularly in incidents of data leakage. The importance of data as a corporate asset is leading to a growing need for detection, prevention and mitigation of such violations by organisations. In this paper we investigate the different types of insider threats and their implications for the corporate environment, with specific emphasis on the special case of data leakage. Organisations should evaluate the risk they face due to insider threats and establish proactive measures in this direction. In response to the challenging problem of identifying insider threats, we design a forensic readiness model which is able to identify, prevent and log email messages that attempt to leak information from an organisation with the aid of steganography.
A BIBO ontology extension for evaluation of scientific research results
Bojana Dimic Surla, Milan Segedinac, Dragan Ivanovic
The paper addresses the issue of the semantic description of bibliographic data in RDF and OWL. More precisely, the paper focuses on presenting scientific research results together with their evaluation and quantitative expression. The existing ontologies that are commonly used for describing bibliographic data are Dublin Core, FOAF and BIBO. The paper proposes a way of representing the data needed for the evaluation of scientific results as an extension of the BIBO ontology. The purpose of the research presented in this paper is to move towards research management system integration in terms of the evaluation of scientific research results for individuals and institutions.
Collective Information Extraction using First-Order Probabilistic Models
Slavko Žitnik, Lovro Šubelj, Dejan Lavbic, Aljaž Zrnec and Marko Bajec
Traditional information extraction (IE) tasks roughly consist of named-entity recognition, relation extraction and coreference resolution. Much work in this area focuses primarily on separate subtasks, where the best performance can be achieved only on specialized domains. In this paper we present a collective IE approach combining all three tasks by employing linear-chain conditional random fields. The use of probabilistic models enables easy communication between tasks on the fly and error correction during the iterative execution process. We introduce a novel iterative IE system architecture with additional semantic and collective feature functions. The proposed system is evaluated against a real-world data set, introduced in the paper, and the results improve on traditional approaches in two tested tasks in terms of error reduction and performance.
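For reference, the linear-chain CRF employed here has the standard form $p(\mathbf{y}\mid\mathbf{x}) = \frac{1}{Z(\mathbf{x})}\exp\Big(\sum_{t=1}^{T}\sum_{k}\lambda_k\, f_k(y_{t-1}, y_t, \mathbf{x}, t)\Big)$, where the $f_k$ are feature functions (in this approach including the semantic and collective features mentioned above), the $\lambda_k$ their learned weights, and $Z(\mathbf{x})$ the normalization over all label sequences.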
Communication in Machine-to-Machine Environments
Iva Bojic, Damjan Katusic, Mario Kusek, Gordan Jezic, Sasa Desic and Darko Huljenic
It has been estimated that by the end of 2020 there will be 50 billion connected devices in Machine-to-Machine (M2M) networks. Such projections should encourage us to deal with the corresponding problems in heterogeneous M2M systems. First of all, devices can communicate through different access technologies (e.g. wireline, 2G/3G, WiFi, Bluetooth) and their communication can be classified as direct or indirect, internal or external. In this paper we explain differences between those types of communication and propose a new identification scheme that allows M2M devices to establish communication in every possible way. Secondly, there is a problem of device hardware and software diversity. To overcome this problem, we propose the usage of the Open Service Gateway Initiative (OSGi) framework.
SSQSA architecture
Zoran Budimac, Gordana Rakic and Milos Savic
The aim of this paper is to describe the architecture of a software system called Set of Software Quality Static Analyzers (SSQSA). The main aim of SSQSA is to provide static software analyzers that ensure, check, and consequently increase the quality of software products. Its main characteristic is language independence, which makes it more usable than many other similar systems.
Rule-based Assignment of Comments to AST Nodes in C++ programs
Tamás Cséri, Zalán Szugyi and Zoltán Porkoláb
Comments are essential components of programming languages: they preserve the developer's intentions, help maintainers to understand hidden concepts, and may act as a source for automatic documentation generation. However, most software maintenance tools (refactoring, slicing and analyser tools) ignore them and therefore lose an important part of the information about the software. One of the reasons why tools neglect comments is that there is no single well-defined location in the software's AST where they should be placed. The relationship between the program's control structure and the comments depends on code conventions and human habits. Our research, part of a project to develop a software maintenance tool, focuses on the code comprehension process for large legacy C++ projects and heavily utilizes code comments. We evaluated the commenting behaviour used in large projects and categorized the major patterns. We found that these patterns strongly correlate within a single project. In this paper we present a method to find the correct place for comments in the AST, based on project-specific rules. We evaluate our method and test it against open source C++ projects.
Evaluation of Tools for Automated Unit Testing for Applications in OSGi
František Geletka, Ladislav Samuelis and Jozef Vojtko
This work provides an overview and comparison of the currently available tools for testing in the OSGi environment, such as Pax-Exam, JUnit4OSGi and Spring DM. We developed a plugin for JUnit4OSGi that allows basic test skeletons to be generated from ordinary Java Eclipse projects. These tests can run directly in the Eclipse environment using the modified SwingGUI runner.
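For illustration only, a basic skeleton of the kind such a generator might emit for an existing project class; it is shown with plain JUnit 4 annotations and a hypothetical InventoryService class, since the abstract does not describe the exact JUnit4OSGi base classes.

```java
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertNotNull;

// Stand-in for a class taken from the original Eclipse project; in a
// generated skeleton this would be the real class under test.
class InventoryService {
    Object findItem(String id) { return null; }
}

// Hypothetical generated skeleton; class and method names are illustrative only.
public class InventoryServiceTest {

    private InventoryService service;

    @Before
    public void setUp() {
        service = new InventoryService();
    }

    @Test
    public void testFindItem() {
        // TODO: replace with real assertions once test data is defined.
        assertNotNull(service);
    }
}
```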
Adoption of object-oriented software metrics for ontology evaluation
Rok Žontar, Marjan Hericko
Object-oriented software metrics are well established and widely acknowledged as a measure of software quality. The aim of our research is to analyze the potential use of some of these metrics for ontology evaluation. In this paper we present the conclusions of our feasibility study, in which we investigated and assessed the ability of software metrics to evaluate ontologies. Based on a review of existing literature, we chose a set of 18 object-oriented software metrics. These were categorized into four groups and analyzed according to their original definitions.
Applying MDA in developing intermediary service for data retrieval
Danijela Boberic-Krsticev
In this paper, a service for data retrieval from an existing library management system is described. The service mediates between the library management system, which provides the data, and a system that requires that data. The main idea is that the service should support various protocols for data retrieval. The service should also be flexible with respect to future updates and simple enough to integrate into any library management system. The service presented in this paper is developed using the Model Driven Architecture (MDA) approach. The different models of the service proposed by MDA are presented in the paper using the UML 2.0 specification. Transformations from models to Java code are performed with the AndroMDA framework.
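As a hedged sketch of the workflow (the names are hypothetical and not taken from the paper), a UML interface modelled for the intermediary service could be generated into a plain Java interface along these lines, with the protocol-specific retrieval logic then added by hand in an implementation class.

```java
// Hypothetical result of generating code from a modelled <<Service>> interface.
public interface SearchService {
    /** Returns bibliographic records matching the given query. */
    java.util.List<String> search(String query);
}

// Hand-written implementation for one concrete retrieval protocol (illustrative).
class Z3950SearchService implements SearchService {
    public java.util.List<String> search(String query) {
        // Translate the query to the underlying library-system protocol
        // and map the response to records; omitted in this sketch.
        return java.util.Collections.emptyList();
    }
}
```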
Data model for consortial circulation in libraries
Danijela Tešendic
This paper deals with a data model of a software system used to manage library patrons. This system is known as a circulation system and is usually developed within a library management system. The paper discusses the functionalities of a circulation system that allows circulation at the level of a library consortium. Based on these functionalities, a data model is made that contains all the data necessary for managing library users, both local users and users from other libraries within the consortium. The presented model was used in the development of one particular circulation system, but the considerations presented here can also be useful in the development of any other circulation system.
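An illustrative entity sketch (ours, not the paper's model) of the core idea that a patron carries a reference to a home library, so that loans can be recorded both for local users and for users from other consortium members:

```java
// Hypothetical, simplified entities for consortial circulation.
class Library { String code; String name; }

class Patron {
    String membershipId;
    Library homeLibrary;        // may be a different consortium member
}

class Loan {
    Patron patron;
    String itemBarcode;
    Library lendingLibrary;     // library that physically lends the item
    java.time.LocalDate dueDate;
}
```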
Implementation and evaluation of a sleep-proxy for energy savings in networked computers
Enida Sheme, Neki Frasheri, Marin Aranitasi
In enterprise networks, idle desktop machines rarely sleep because users and IT departments want them to be always accessible. While some solutions have been proposed, few of them have been implemented, let alone evaluated, in real network environments. In this paper we implement and evaluate a sleep proxy system based on an existing proposed architecture for such a proxy. The system is tested on 6 PC machines in a real network. The results of the experiments show that machines can sleep for almost 55% of the experiment time (which translates into energy savings) while maintaining their network accessibility to the users' satisfaction. However, “cooperation” between IT procedures and the sleep proxy system is needed in order to gain better performance and thus dissipate less energy.
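One commonly used building block of sleep proxies, shown here as a hedged sketch rather than the paper's exact mechanism, is waking a sleeping machine with a Wake-on-LAN magic packet when traffic destined for it arrives: 6 bytes of 0xFF followed by the target's MAC address repeated 16 times, broadcast over UDP.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class WakeOnLan {
    // Sends a standard Wake-on-LAN magic packet to the given broadcast address.
    public static void wake(String broadcastAddr, byte[] mac) throws Exception {
        byte[] packet = new byte[6 + 16 * mac.length];
        for (int i = 0; i < 6; i++) packet[i] = (byte) 0xFF;          // synchronization stream
        for (int i = 6; i < packet.length; i += mac.length) {
            System.arraycopy(mac, 0, packet, i, mac.length);           // MAC repeated 16 times
        }
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);
            socket.send(new DatagramPacket(packet, packet.length,
                    InetAddress.getByName(broadcastAddr), 9));         // port 9 is the usual WoL port
        }
    }
}
```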
Layout Proposal for One-Handed Device Interface
Oliver Sipos, Ivan Peric and Dragan Ivetic
The paper presents a novel interface layout design suited for thumb navigation on a one-handed device. The layout supports common tasks for this class of devices with minimal cognitive and physical effort and was developed through three iterations of designing and testing layouts. The first design proposal was created based on existing papers and current trends in the field of mobile device design. The second and third proposals were made by altering the previous proposal according to the analysis of the test results. Finally, by combining the experience of others who have worked on similar studies, theoretical principles of interface design and empirically gained knowledge about this problem, the final design proposal was made, one that will, hopefully, become a foundation for designing layouts for small handheld devices with touch-sensitive screens.
An Evaluation of Java Code Coverage Testing Tools
Elinda Kajo Mece, Megi Tartari
The code coverage metric is considered the most important metric used in the analysis of software projects for testing. Code coverage analysis also helps the testing process by finding areas of a program not exercised by a set of test cases, creating additional test cases to increase coverage, and determining a quantitative measure of the code coverage, which is an indirect measure of quality. There is a large number of automated tools for finding the coverage of test cases in Java. Choosing an appropriate tool for the application to be tested may be a complicated process. To make it easier, we propose an approach for measuring the characteristics of these testing tools in order to evaluate them systematically and to select the appropriate one.
Smart UI for New-age Smartphones with Touchscreen
Ivan Peric, Oliver Sipos
Even a quick peek at the specifications of upcoming or currently most popular mobile phones leads to the conclusion that they are getting physically thinner, yet taller and wider, even though the technology inside is getting smaller. Screen size is the main reason for this. As many manufacturers surpass the 3.5” display barrier to bring more information and options to the screen, the challenge is laid upon the software to keep the benefits of one-handed interaction available to users. In this paper we focus on a problem that appears on some of the most popular mobile platforms: the application menu. Combining usage statistics, minimalistic design and an ergonomic point of view, we created a solution that is generic but still follows the design guidelines of most of the major platforms.
Analyses of QoS Routing Approach and the Starvation's Evaluation in LAN
Ariana Bejleri, Igli Tafa, Aleksander Biberaj, Ermal Beqiri and Julian Fejzaj
This paper gives a survey of a QoS routing architecture implemented with Dijkstra's algorithm. The performance of the QoS routing architecture is evaluated by comparing shortest-path routing with QoS routing. A very important feature of QoS routing is the set of conditions for eliminating starvation. Experimentally, we have evaluated the number of packets delivered from a source node to a destination node in the QoS routing architecture with high- and low-priority classes, using the ns-2 simulator.
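For reference, a minimal sketch of Dijkstra's shortest-path algorithm, which the abstract names as the basis of the QoS routing architecture; the graph representation is illustrative, and in a QoS setting the per-link cost would be derived from delay or bandwidth constraints rather than a plain distance.

```java
import java.util.*;

public class Dijkstra {

    static class Edge {
        int to; double cost;
        Edge(int to, double cost) { this.to = to; this.cost = cost; }
    }

    // Returns the minimum cost from the source node to every other node.
    static double[] shortestPaths(List<List<Edge>> graph, int source) {
        double[] dist = new double[graph.size()];
        Arrays.fill(dist, Double.POSITIVE_INFINITY);
        dist[source] = 0.0;

        PriorityQueue<double[]> queue =
                new PriorityQueue<>(Comparator.comparingDouble((double[] a) -> a[1]));
        queue.add(new double[] { source, 0.0 });

        while (!queue.isEmpty()) {
            double[] top = queue.poll();
            int node = (int) top[0];
            if (top[1] > dist[node]) continue;              // skip stale queue entries
            for (Edge e : graph.get(node)) {
                double candidate = dist[node] + e.cost;
                if (candidate < dist[e.to]) {
                    dist[e.to] = candidate;
                    queue.add(new double[] { e.to, candidate });
                }
            }
        }
        return dist;
    }
}
```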
The Evaluation of Performance in Flow Label and Non Flow Label Approach based on IPv6 technology
Ariana Bejleri, Igli Tafa, Ermal Beqiri, Julian Fejzaj and Aleksander Biberaj
In this paper, we evaluate the performance of two broadcasters using the Flow Label and Non Flow Label approaches. Experimentally, we have shown that the throughput utilization for each broadcaster with the Flow Label approach, which is implemented in MPLS routing technology, is 89.95%. This result is better than that of the Non Flow Label approach, which is evaluated at 92.77%. The aim of this paper is to show that the performance of MPLS routers is better than that of IP routers, especially in throughput utilization, low packet drop rate and time delay. The second approach is implemented in IP routing. Experimentally, we have generated video stream packets between the two broadcasters over an arrangement of router nodes. The experiments are performed using the ns-2 simulator.
Protection of web applications using Aspect Oriented Programming and performance evaluation
Elinda Kajo Mece, Lorena Kodra, Enid Vrenozi and Bojken Shehu
Web application security is a critical issue. Security concerns are often scattered through different parts of the system. Aspect oriented programming is a programming paradigm that provides explicit mechanisms to modularize these concerns. In this paper we present an aspect-oriented system for detecting and preventing common attacks on web applications, such as Cross Site Scripting (XSS) and SQL injection, and evaluate its performance by measuring the overhead introduced into the web application. The results of our tests show that this technique is effective in detecting attacks while maintaining a low performance overhead.
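A hedged sketch of the general technique (not the paper's actual aspects): an AspectJ @Around advice that inspects String arguments in a hypothetical web-layer package for typical XSS and SQL injection fragments before letting the intercepted call proceed.

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// The com.example.web package is hypothetical; the patterns are deliberately
// simplistic and only illustrate where a real detector would plug in.
@Aspect
public class InputValidationAspect {

    @Around("execution(* com.example.web..*.*(..))")
    public Object checkArguments(ProceedingJoinPoint joinPoint) throws Throwable {
        for (Object arg : joinPoint.getArgs()) {
            if (arg instanceof String && looksMalicious((String) arg)) {
                throw new SecurityException("Potentially malicious input rejected");
            }
        }
        return joinPoint.proceed();     // input looks clean, continue the original call
    }

    private boolean looksMalicious(String value) {
        String v = value.toLowerCase();
        return v.contains("<script") || v.contains("' or '1'='1") || v.contains("union select");
    }
}
```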
Fuzzy XML data editor supporting XML Schema
Goran Panic, Miloš Rackovic and Srdan Škrbic
The standard XML format does not allow for imprecise or incomplete values, which is one of the requirements imposed on this format by many real-world uses. Using fuzzy logic to introduce indefiniteness into XML has been researched in several papers over the last decade. While those papers were mostly focused on setting up the theory and the syntax, this paper has practical usage as its main goal. An application called ’Fuzzy XML editor’ was created and is described in this research. The editor is intended to work with fuzzy XML and to support XSD and DTD schemas.
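As an illustration of the kind of document such an editor manipulates (the element and attribute names below are ours, not the paper's fuzzy XML syntax), an imprecise value can be stored as a small possibility distribution instead of a single crisp value; the sketch builds one with the standard Java DOM API.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class FuzzyXmlExample {
    // Builds <age><value possibility="0.8">25</value><value possibility="0.4">30</value></age>
    public static Document build() throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
        Element age = doc.createElement("age");
        doc.appendChild(age);

        Element likely = doc.createElement("value");
        likely.setAttribute("possibility", "0.8");
        likely.setTextContent("25");
        age.appendChild(likely);

        Element lessLikely = doc.createElement("value");
        lessLikely.setAttribute("possibility", "0.4");
        lessLikely.setTextContent("30");
        age.appendChild(lessLikely);
        return doc;
    }
}
```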
Plug-in components for interactive geography in "Geometrijica" DGS
Djordje Herceg, Vera Herceg and Davorka Radakovic
The use of dynamic geometry software (DGS) in all kinds of mathematical games has become a widespread phenomenon. The rich features and availability of free DGS, such as GeoGebra, have caused a growing interest in developing teaching materials for subjects other than mathematics. We present the teaching materials that we developed in GeoGebra for the subject of geography at the elementary and middle school level. An interactive component that we developed, aimed specifically at applications in computer geography, is also discussed.
Program Assessment via a Capstone Project
John Galletly, Dimitar Christozov, Volin Karagiozov and Stoyan Bonev
This paper describes an approach that has been adopted at the American University in Bulgaria in order to assess the Computer Science degree program for accreditation purposes.
Logical Representation Of Dependencies Of Items And The Complexity Of Customer Sets
Demetrovics János, Hua Nam Son and Gubán Ákos
The problem of discovering frequent market baskets and association rules has been considered widely in the data mining literature. In this study, using the algebraic representation of the market basket model, we propose a concept of logical constraints on items in an effort to detect the logical relationships hidden among them. Via the relationship between propositional logic and logical constraints on items, we also propose the concept of the complexity of customers. As a result, we show that every set of customers can be characterized by a logical constraint and can be divided into different blocks that are characterized by quite simple logical constraints. In a natural way, the complexity of a customer set is defined as the number of blocks it contains.
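A small illustrative example of the idea, constructed by us rather than taken from the paper:

```latex
% Items I = \{\text{bread}, \text{butter}, \text{milk}\}; a basket is a truth assignment over I.
% A group of customers may be characterized by a simple logical constraint such as
\varphi \;=\; (\text{bread} \wedge \text{butter}) \rightarrow \text{milk}.
% If a customer set S splits into blocks S_1, \dots, S_k, each characterized by such a
% simple constraint, then, following the definition above,
\mathrm{complexity}(S) \;=\; k.
```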
Cloud Computing Interoperability Approaches – Possibilities and Challenges
Magdalena Kostoska, Marjan Gusev, Sasko Ristov and Kiril Kiroski
Cloud Computing Interoperability (CCI) is a hot research topic and has been addressed by many scientists, architects, groups, etc. Many different approaches and possible solutions have been published, but there is no accepted standard or model yet. This paper surveys the most influential published CCI models and discusses their possibilities and challenges. The accent of this paper is on the analysis of the Software as a Service (SaaS) CCI model based on adapters. The current state of the cloud computing market and the results of recent Cloud Computing (CC) market surveys are also included in our analysis. The presented conclusion addresses the increasing trend in the usage of cloud computing and the lack of visible results in achieving cloud computing interoperability, so the next logical step is to create adapters that achieve interoperability at the SaaS level.
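A hypothetical sketch (ours, not a model from the surveyed literature) of the adapter idea at the SaaS level: each provider is wrapped behind one common interface, so that client code is not tied to a single provider's API.

```java
// Common SaaS-level contract; operation names are illustrative only.
public interface SaasAdapter {
    String createTenant(String organization);
    void uploadDocument(String tenantId, String name, byte[] content);
    byte[] downloadDocument(String tenantId, String name);
}

// One adapter per provider implements the same contract (provider name is fictional).
class ProviderAAdapter implements SaasAdapter {
    public String createTenant(String organization) { return "tenant-" + organization; }
    public void uploadDocument(String tenantId, String name, byte[] content) {
        // call provider A's proprietary API here
    }
    public byte[] downloadDocument(String tenantId, String name) { return new byte[0]; }
}
```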
A Review of Disc Scrubbing and Intra Disc Redundancy techniques for reducing data loss in disc FileSystems
Genti Daci, Aisa Bezhani
Because of the high demand that applications and new technologies have today for data storage capacity, more disk drives are needed, resulting in an increased probability of inaccessible sectors, referred to as Latent Sector Errors (LSEs). Aiming to reduce data loss caused by LSEs, two main techniques have been studied extensively of late: disk scrubbing, which performs read operations during idle periods to search for errors, and intra-disk redundancy, which is based on redundancy codes. This paper reviews and discusses the problem of LSEs, the main causes that lead to them, their properties, and their correlation on nearline and enterprise disks. Focusing on reducing LSEs with regard to security, processing overhead and disk space, we analyze and compare the latest techniques, disk scrubbing and intra-disk redundancy, aiming to highlight the issues and challenges according to different statistical approaches. Furthermore, based on previous evaluation results, we discuss and introduce the benefits of using both schemes simultaneously: combining different IDR coding schemes with accelerated scrubbing and staggered scrubbing, in particular in regions of disk drives that store crucial data, during idle periods. Finally, based on an extended statistical analysis, we discuss and evaluate the best ways to reduce data loss with a minimum impact on system performance.
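For concreteness, a hedged sketch of the staggered scrubbing read order mentioned above: the disk is split into regions, and each pass reads the k-th segment of every region before moving on to segment k+1, so the whole surface is sampled early in the scrub cycle instead of strictly sequentially.

```java
import java.util.ArrayList;
import java.util.List;

public class StaggeredScrubOrder {
    // Returns global segment indices in the order a staggered scrub would read them.
    static List<Integer> order(int regions, int segmentsPerRegion) {
        List<Integer> segments = new ArrayList<>();
        for (int k = 0; k < segmentsPerRegion; k++) {
            for (int r = 0; r < regions; r++) {
                segments.add(r * segmentsPerRegion + k);   // k-th segment of region r
            }
        }
        return segments;
    }
}
```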
Robust Moldable Scheduling Using Application Benchmarking For Elastic Environments
Ibad Kureshi, Violeta Holmes and David Cooke
In this paper we present a framework for developing an intelligent job management and scheduling system that utilizes application-specific benchmarks to mould jobs onto available resources. In an attempt to achieve the seemingly irreconcilable goals of maximum usage and minimum turnaround time, this research aims to adapt an open-framework benchmarking scheme to supply information to a mouldable job scheduler. In a world obsessed with green IT, hardware efficiency and the utilization of computer systems become essential. With an average computer rack consuming between 7 and 25 kW, it is essential that resources be utilized in the best possible way. Currently, the batch schedulers employed to manage these multi-user, multi-application environments are nothing more than match-making and service level agreement (SLA) enforcing tools. These management systems rely on user-prescribed parameters that can lead to over- or under-booking of compute resources. System administrators strive to get maximum “usage efficiency” out of the systems by manual fine-tuning and by restricting queues. Existing mouldable scheduling strategies utilize scalability characteristics, which are inherently two-dimensional and cannot provide predictable scheduling information. In this paper we consider existing benchmarking schemes and tools, schedulers and scheduling strategies, and elastic computational environments. We propose a novel job management system that will extract the performance characteristics of an application, with an associated dataset and workload, in order to devise optimal resource allocations and scheduling decisions. As we move towards an era where on-demand computing becomes the fifth utility, the end product of this research will cope with elastic computational environments.
<L|ETAP> model for an Adaptive Tutoring System
Eugenia Kovatcheva
Nowadays, interest in adaptive intelligent eLearning systems is increasing. There are different kinds of adaptation: to the content, to the learning process, to the assessment, and so on. The crucial point for the learner's motivation is to capture their needs and abilities and then to act, i.e. for the system to respond. The intelligent tutor (the system agent or agents) has to decide on the most appropriate path through the content based on the information collected about the learner, such as learning style, the learner's track through the topics and the learner's grades, and to offer further steps. The intelligent tutor agent keeps all the data for every single learner, analyses them, and offers the learner's next actions in the system. This paper presents a constructive model based on the learning styles of learners and their abilities, and shows how it could be implemented in an intelligent eLearning system. It can be used for self-study in formal and informal education, as well as for presenting digitalized cultural and historical heritage for educational purposes.