

1. The Doctoral School of Computer Science and Information Technologies

The doctoral school was established in 1994 to spearhead research and development in the theoretical and practical aspects of computer science and information technologies. Its focus includes research in discrete mathematics, algorithms, optimization of info-communication networks, software technologies, cryptography, embedded systems, and model-driven approaches. The school integrates the research activities pursued at 10 different departments of the Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics. The scientific activities of the school are conducted in two main groups: (i) info-communication systems; and (ii) intelligent systems. Since 1994 more than 100 students have successfully defended their PhD theses and have been awarded the degree. As far as the scientific output is concerned, the members, supervisors and students of the school have published several hundred high-ranking journal papers, and the cumulative impact factor exceeds 2000.

2. Research and development projects supported by TÁMOP-4.2.2/B-10/1-2010-0009

The main thrust of research supported by the TÁMOP project is to investigate problems that require large-scale simulation and computing. These objectives include the following R&D projects: (i) GPGPU applications; (ii) distributed software systems; (iii) distributed solution of complex modeling and optimization problems; (iv) distributed evaluation and topological analysis; (v) applications of declarative languages for reasoning; (vi) studying problems and algorithms of graph theory and geometry; (vii) simulation of complex info-communication systems; (viii) automatic verification of data security protocols.

2.1 GPGPU Applications

There is an increasing need in computer graphics for efficient management of complex scenes by using parallel processing. Furthermore, a fast evaluation of complex optical effects (e.g. global illumination effects) can only be implemented on parallel architectures. In the area of medical image processing and visualization, the efficient processing of large-scale volumetric data (e.g. CT, MRI, PET) requires the application of appropriate data partitioning schemes. 3D scenes of moderate complexity and volumetric data of medium resolution can be processed in real time even on a single Graphics Processing Unit (GPU). However, more complex models can hardly be handled efficiently because of the limited local memory of the GPU. If the data does not fit into the local memory, it has to be stored in the operative memory and its partitions have to be transferred to the GPU on demand. In this case, the computational capacity of the GPU cannot be fully exploited, as the bottleneck is the limited bandwidth between the operative memory and the local GPU memory. To remedy this problem, GPU clusters can be applied, where a GPU node is assigned to each data partition. Therefore, huge data partitions do not have to be transferred. Instead, the nodes send image data to each other, which represent the processed data partitions. The corresponding research mainly focuses on the minimization of the communication overhead between the nodes.
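To make the communication pattern concrete, the following toy Python sketch mimics sort-last compositing on a GPU cluster: each simulated node "renders" only its own partition, and the nodes exchange small color/depth images instead of the raw volume data. The node count, resolution, and random rendering are invented for illustration; this is not the project's actual implementation.

```python
import numpy as np

# Toy "sort-last" image compositing on a simulated GPU cluster: each node
# owns one volume partition, renders it locally, and only exchanges small
# color/depth images instead of the raw data partition.

W, H = 64, 64          # resolution of the images exchanged between nodes
N_NODES = 4            # one GPU node per data partition

rng = np.random.default_rng(0)

def render_partition(node_id):
    """Stand-in for the local GPU rendering of one volume partition:
    returns a grayscale image and a per-pixel depth buffer."""
    color = rng.random((H, W))
    depth = rng.random((H, W)) + node_id   # partitions sit at different depths
    return color, depth

# Each node renders its own partition; only the (color, depth) images would
# travel over the network -- kilobytes instead of gigabytes of volume data.
images = [render_partition(i) for i in range(N_NODES)]

# Depth compositing (the communication step): keep the front-most sample.
final_color, final_depth = (a.copy() for a in images[0])
for color, depth in images[1:]:
    closer = depth < final_depth
    final_color[closer] = color[closer]
    final_depth[closer] = depth[closer]

volume_bytes = N_NODES * 512**3            # four 512^3 partitions, 1 B/voxel
image_bytes = N_NODES * W * H * 2 * 8      # color + depth as float64
print(f"communicated {image_bytes} B instead of moving {volume_bytes} B")
```

The printed byte counts illustrate why exchanging rendered images rather than data partitions keeps the inter-node traffic low.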

2.2 Distributed software systems

Cloud computing solutions form the foundation of future IT systems, which will be able to dynamically allocate resources (CPU power, memory and storage capacity, virtual machines, etc.) according to the current load at all times. These systems are particularly useful at large companies requiring massive yet continuously changing amounts of computing capacity, where a private cloud infrastructure offers great savings on IT costs.

However, since cloud computing is still a relatively young field, several competing private cloud solutions exist and none of them can be considered mature. This research project focuses on two areas. The first addresses the interoperability issues between various private cloud providers by exploring the existing technologies and designing a common metamodel that could hide the differences between the cloud solutions. The second aims to develop resource scheduling algorithms for cloud infrastructures that are optimized according to various metrics. These improved algorithms would further reduce the energy consumption of cloud systems by dynamically turning off unused hardware components.
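As a minimal illustration of the second research area, the Python sketch below consolidates virtual machines onto as few hosts as possible with a first-fit-decreasing heuristic, so that idle hosts can be powered down. The host capacity and VM demands are invented numbers; the project's actual scheduling algorithms and metrics are not described here.

```python
# Toy consolidation scheduler: pack virtual machines onto as few hosts as
# possible (first-fit decreasing) so that idle hosts can be powered down.
# Host capacity and VM demands are invented numbers for illustration.

HOST_CPU = 16  # CPU cores per physical host (assumed)

def schedule(vm_demands, n_hosts):
    """Return a list of hosts, each a list of the VM demands placed on it,
    or None if the VMs do not fit."""
    hosts = [[] for _ in range(n_hosts)]
    for demand in sorted(vm_demands, reverse=True):   # largest VMs first
        for host in hosts:
            if sum(host) + demand <= HOST_CPU:
                host.append(demand)
                break
        else:
            return None                               # capacity exceeded
    return hosts

vms = [8, 6, 5, 4, 4, 3, 2, 1]
placement = schedule(vms, n_hosts=4)
active = sum(1 for host in placement if host)
print(f"active hosts: {active} of 4; the remaining hosts can be switched off")
```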

2.3 Distributed solution of complex modeling and optimization problems

The major goal of this research topic is to solve complex modeling and optimization problems efficiently in a parallel manner. In the project we deal with two applications: network planning and data mining. Both require a huge amount of computation in practical applications. We develop new techniques for configuring routers in the current Internet such that the overall reliability and the efficiency of the network are improved. Network efficiency means either energy efficiency or an increase in throughput through better use of the network capacities. Besides, our goal is to develop a suite of new parallel data mining and text mining algorithms for several applications, such as automatically processing the information in medical journals and books. Last but not least, we focus on "black box" optimization algorithms that can run in parallel on desktop grids. We are mainly interested in developing a scalable integer linear program solver.
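The Python sketch below illustrates the master-worker pattern behind such "black box" optimization, with a local process pool standing in for desktop-grid nodes. The objective function and the shrinking-window heuristic are placeholders, not the project's algorithms.

```python
import random
from concurrent.futures import ProcessPoolExecutor

# Toy master-worker "black box" optimization: the master only sees candidate
# inputs and the returned objective values, never the function's structure.
# The objective and the shrinking-window heuristic are placeholders; on a
# real desktop grid each evaluation would run on a volunteer node.

def objective(x):
    """Expensive black-box evaluation (stand-in)."""
    return (x - 3.14) ** 2 + 0.1 * random.random()

def master(rounds=10, batch=8, lo=-10.0, hi=10.0):
    best_x, best_f = None, float("inf")
    with ProcessPoolExecutor() as pool:               # workers ~ grid nodes
        for _ in range(rounds):
            xs = [random.uniform(lo, hi) for _ in range(batch)]
            for x, f in zip(xs, pool.map(objective, xs)):
                if f < best_f:
                    best_x, best_f = x, f
            # narrow the search window around the incumbent (pure heuristic)
            lo, hi = best_x - (hi - lo) / 4, best_x + (hi - lo) / 4
    return best_x, best_f

if __name__ == "__main__":
    print(master())
```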

2.4 Distributed evaluation and topological analysis

Pattern recognition has several differing aspects, some of which are covered by this R&D project: fetal heart sound analysis, including the identification of heart murmurs; the identification of various intrusion sequences in computer networks; and, finally, the topological analysis of archaeological inscriptions. In these three different fields, similar pattern analysis and data mining methods can be used. The difficulty of pattern analysis in these areas is that the measured or observed values are the results of various effects, which cannot be modeled exactly. Therefore our research focuses on applied statistical and knowledge-mining methods such as statistical simulation and cluster analysis.
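As a small, self-contained illustration of the cluster-analysis side, the Python sketch below runs k-means on synthetic two-dimensional "heart sound" feature vectors. The features and values are fabricated for illustration and carry no clinical meaning.

```python
import numpy as np

# Minimal k-means clustering of synthetic "heart sound" feature vectors
# (e.g. beat energy and dominant frequency). The features and values are
# fabricated for illustration and carry no clinical meaning.

rng = np.random.default_rng(1)
normal = rng.normal([1.0, 40.0], [0.2, 5.0], size=(50, 2))   # normal-like beats
murmur = rng.normal([2.5, 90.0], [0.3, 8.0], size=(20, 2))   # murmur-like beats
X = np.vstack([normal, murmur])

def kmeans(X, k=2, iters=20):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # recompute centers; keep the old one if a cluster runs empty
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(X)
print("cluster sizes:", np.bincount(labels))
print("cluster centers:\n", centers)
```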

2.5 Applications of declarative languages for reasoning

Declarative programming languages allow a high-level formulation of domain problems. Hence the programmer can focus on what needs to be solved and has to worry less about how to solve it. In particular, the built-in reasoning mechanism behind the Prolog logic programming language can be used to solve various complex reasoning problems. We are currently exploring the use of declarative programming tools for type analysis, with special emphasis on constraint programming. We are developing a static type inference tool for the Q functional programming language, which has the potential to greatly enhance programmer productivity by discovering typical programming errors at compile time.
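A flavor of constraint-based type inference can be given in a few lines: the Python sketch below solves type equality constraints by unification for a toy example. This only illustrates the general technique; the actual tool targets the Q language and is built on Prolog/constraint technology, which is not reproduced here.

```python
# A minimal sketch of constraint-based type inference by unification.
# The toy constraints below are invented; the actual tool targets the
# Q language and uses Prolog/constraint technology, not reproduced here.

def resolve(t, subst):
    """Follow a type variable to its current binding."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Solve the constraint a = b, extending the substitution subst.
    Lowercase strings are type variables; no occurs check (sketch only)."""
    a, b = resolve(a, subst), resolve(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.islower():
        return {**subst, a: b}
    if isinstance(b, str) and b.islower():
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
        return subst
    raise TypeError(f"type error: {a} vs {b}")

# Constraints for a function f with f(x) = x + 1, where (+) forces Int:
subst = {}
subst = unify("t1", "Int", subst)                  # x is used as an Int
subst = unify("t2", "Int", subst)                  # the result of (+) is Int
subst = unify(("fun", "t1", "t2"), ("fun", "Int", "Int"), subst)
print("f :", ("fun", resolve("t1", subst), resolve("t2", subst)))
```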

2.6 Studying problems and algorithms of graph theory and geometry

Problems pertaining to the sub-fields of graph theory and computational geometry emerge constantly in countless areas of information technology. Although efficient algorithms are already known for many of the frequently occurring problems, many questions remain open, especially when one considers randomized and/or distributed algorithms. Specifically, our research mainly focuses on the following topics: (i) investigating how and when randomization allows for more efficient algorithms than deterministic ones, with emphasis on the data structures being used; (ii) analyzing the complexity of deterministic, randomized and/or distributed algorithms on conventional and random inputs; and (iii) devising new algorithms that improve upon these complexity bounds.
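As a classic, standard example for item (i), the Python sketch below implements Karger's randomized minimum-cut algorithm, whose simplicity has no comparably simple deterministic counterpart. The small test graph is invented for illustration.

```python
import random

# Karger's randomized minimum-cut algorithm: contract random edges until two
# super-nodes remain; the surviving edges form a cut. A single run succeeds
# only with probability >= 2/(n(n-1)), so it is repeated many times.

def karger_min_cut(edges, n):
    parent = list(range(n))
    def find(v):                          # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    remaining = n
    pool = edges[:]
    random.shuffle(pool)                  # contract edges in random order
    for u, v in pool:
        if remaining == 2:
            break
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv               # contract the edge (u, v)
            remaining -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

# A 4-cycle with one chord; its minimum cut has size 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print("estimated min cut:", min(karger_min_cut(edges, 4) for _ in range(100)))
```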

2.7 Simulation of complex info-communication systems

With the improvement of computer infrastructure and the steep growth of the computational capacity at hand, simulation techniques play an increasingly important role in engineering design practice. Simulations provide a way to reveal bottlenecks and potential failures of a system without building prototypes, making the development process of complex infrastructures more effective in terms of both cost and time. The basis of the simulation is the definition of the differential equations describing the physical laws of the system, and the determination of the coupling relations using the proper boundary conditions, which represent the interactions between the parts of the complex system. Then, by means of spatial and temporal discretization, a set of algebraic equations is obtained, which has to be solved. The computational complexity of the problem is characterized by the total number of degrees of freedom (DOF), which gives the number of unknowns in the equations. Naturally, to achieve more accurate results, the number of DOF has to be increased. In order to model real systems with sufficient accuracy, the number of DOF can grow so large that the problem can only be solved on parallel architectures.

If the discretization method is selected properly, the solution algorithm can be parallelized with extremely good efficiency. In our case the finite element or finite volume methods can serve as examples, which can reduce the communication time between the cores performing the computation. When the solution process is distributed over a number of cores, the computational time can decrease faster than linearly, which is a remarkable advantage of parallelization. Thus, if communication between the cores does not introduce too much overhead, the solution process is sped up more than linearly. This phenomenon is called super-linear speedup. Therefore, the capacities of a supercomputer could provide maximal efficiency for these simulations.
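The role of communication in such a decomposed solver can be shown with a toy example: the Python sketch below solves the 1D heat equation by explicit finite differences, with the domain split into two subdomains that exchange a single boundary ("halo") value per time step. All sizes and parameters are illustrative, not taken from the project.

```python
import numpy as np

# 1D heat equation solved by explicit finite differences, with the domain
# split into two subdomains that exchange a single boundary ("halo") value
# per time step. All sizes and parameters are illustrative.

N, STEPS, ALPHA = 100, 500, 0.4        # DOF, time steps, diffusion number

u = np.zeros(N)
u[N // 2] = 1.0                        # initial heat spike
left, right = u[:N // 2].copy(), u[N // 2:].copy()

def step(part, ghost_l, ghost_r):
    """One explicit time step on a subdomain with given ghost values."""
    ext = np.concatenate([[ghost_l], part, [ghost_r]])
    return part + ALPHA * (ext[2:] - 2 * part + ext[:-2])

for _ in range(STEPS):
    gl, gr = right[0], left[-1]        # the only inter-core communication
    left = step(left, left[0], gl)     # insulated (zero-flux) outer wall
    right = step(right, gr, right[-1])

print("total heat (should stay ~1):", float(left.sum() + right.sum()))
```

Because only one value per subdomain boundary travels between cores each step, the communication cost stays negligible next to the per-subdomain computation, which is exactly the regime in which domain decomposition scales well.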

Quantum channels can be implemented in practice very easily and make it possible to send various types of information. Contrary to classical channels, quantum channels can be used to construct more advanced communication primitives. Recently, there has been much progress toward describing the properties of quantum channels; however, many open questions remain. The research work focuses on the still unsolved questions of quantum information processing, quantum communications and quantum cryptography by combining statistical, mathematical and information-theoretic analysis with algorithmic tools. With the help of the proposed research work, information transmission can be realized even in a very noisy channel or network environment. In quantum communications, it is even possible to use zero-capacity quantum channels for perfect information transmission.

The complete theoretical background for describing the capacity recovery of very noisy quantum channels is still unknown. The success of future long-distance quantum communications and global quantum key distribution systems strongly depends on the development of efficient quantum repeaters. We show that the efficiency of the quantum repeater can be increased dramatically, which opens new perspectives in future long-distance quantum communications. We present a fundamentally new idea that enhances the efficiency of quantum repeaters. An efficient quantum error-correction technique will be developed for very noisy optical and satellite quantum channels, which can also be used in long-distance quantum communications. In the proposed research, a large set of possible quantum states, channel models, and channel probabilities has to be analyzed, which requires an extremely powerful computer architecture with multi-core technology.
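As a trivially small instance of such a channel-model scan, the Python sketch below sweeps the error probability of a depolarizing channel, a standard textbook channel model, and reports the output fidelity for a pure input state. Realistic state/channel scans of the kind described above are vastly larger than this toy loop.

```python
import numpy as np

# Sweeping a simple channel model: a depolarizing channel with error
# probability p applied to a single qubit, reporting the fidelity of the
# output with the pure input state |0>. Realistic channel/state scans of
# the kind described above are vastly larger than this toy loop.

def depolarize(rho, p):
    """Depolarizing channel: with probability p, replace the state by I/2."""
    return (1 - p) * rho + p * np.eye(2) / 2

ket0 = np.array([[1.0], [0.0]])
rho_in = ket0 @ ket0.T                      # density matrix |0><0|

for p in np.linspace(0.0, 1.0, 5):
    rho_out = depolarize(rho_in, p)
    fidelity = (ket0.T @ rho_out @ ket0).item()   # <0|rho_out|0>
    print(f"p = {p:.2f}  fidelity = {fidelity:.3f}")
```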

2.8 Automatic verification of data security protocols

The security of info-communication networks and computer systems is a very important issue nowadays. Conventional data security objectives, such as secrecy, integrity and authenticity, can be achieved by the application of security protocols. Unfortunately, designing security protocols is a very difficult task, which is confirmed by the fact that critical security holes have been found in many widely used security protocols. Informal analysis of security protocols and systems is error-prone and thus not considered a reliable approach. Instead, formal analysis and systematic methods are required, which are more precise. A further advantage of using formal methods is that they enable automated verification. The goal of the research is to propose effective formal and automated verification methods which allow either proving the security of protocols and systems or detecting security holes. The research covers two main topics: (i) automated security verification of WSN transport protocols; and (ii) automated verification of secure routing protocols.
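The flavor of automated verification can be conveyed by a deliberately tiny example: the Python sketch below closes an attacker's knowledge under a single Dolev-Yao style decryption rule and checks whether a secret becomes derivable. The protocol runs are invented, and real verifiers handle far richer message algebras and attacker capabilities.

```python
# Toy illustration of automated protocol verification: exhaustively close
# the attacker's knowledge under a single Dolev-Yao style decryption rule
# and check whether the secret becomes derivable. The protocol runs below
# are invented; real verifiers handle far richer message algebras.

SECRET, KEY = "secret", "kAB"

def close(observed):
    """Fixed-point closure: decrypt anything whose key is already known.
    Messages of the form ("enc", payload, key) are ciphertexts."""
    knowledge = set(observed)
    changed = True
    while changed:
        changed = False
        for msg in list(knowledge):
            if (isinstance(msg, tuple) and msg[0] == "enc"
                    and msg[2] in knowledge and msg[1] not in knowledge):
                knowledge.add(msg[1])
                changed = True
    return knowledge

good_run = [("enc", SECRET, KEY)]         # secret sent under a shared key
bad_run = [("enc", SECRET, KEY), KEY]     # ...but the key itself also leaks

for name, run in [("good", good_run), ("bad", bad_run)]:
    print(f"{name} run: secret leaked =", SECRET in close(run))
```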