Keynote speakers



Pascal Bouvry
Faculty of Sciences, Technology and Communication
University of Luxembourg
Luxembourg
Title:
Cloud computing is definitely the way to go. What are the new security requirements in the context of cloud computing? Would you trust another company to host your IT while its other customers share the same hardware? This presentation highlights the challenges of cloud computing in terms of security and confidentiality. The limitations of current systems will be described, as will compliance with security and auditing standards. Finally, a new generation of hardware- and software-based solutions built on crypto-chips, ARM processors, and new security protocols and services will be introduced.

Pascal Bouvry earned his undergraduate degree in Economic and Social Sciences and his Master's degree in Computer Science with distinction ('91) from the University of Namur, Belgium. He went on to obtain his Ph.D. degree ('94) in Computer Science with great distinction at the University of Grenoble (INPG), France. His research at the IMAG laboratory focused on mapping and scheduling task graphs onto distributed-memory parallel computers. Next, he performed post-doctoral research on coordination languages and multi-agent evolutionary computing at CWI in Amsterdam. Dr. Bouvry gained industrial experience as manager of the technology consultant team for FICS (SONE), a world leader in electronic financial services. Next, he worked as CEO and CTO of SDC, a Saigon-based joint venture between SPT (a major telecom operator in Vietnam), Spacebel SA (a Belgian leader in space, GIS and healthcare), and IOIT, a public research and training center. After that, Dr. Bouvry moved to Montreal as VP Production of Lat45 and Development Director for MetaSolv Software (ORCL), a world leader in Operation Support Systems for the telecom industry (e.g. AT&T, WorldCom, Bell Canada). Dr. Bouvry is currently serving as Professor in the Computer Science and Communications (CSC) research unit of the Faculty of Sciences, Technology and Communication of the University of Luxembourg. He is also a faculty member of the Interdisciplinary Centre for Security, Reliability and Trust (SnT) and is active in various scientific committees and technical workgroups (IEEE CIS Cloud Computing vice-chair, IEEE TCSC GreenIT steering committee, ERCIM WG, ANR, COST TIST, etc.).
 

Uwe Schwiegelshohn
TU Dortmund University
Germany
Title:
Compared to the past, today's resource management for large computer systems has become more complex, since it must consider various additional constraints such as virtualization, failure tolerance, and energy efficiency. It is therefore not surprising that almost every conference on supercomputers or on parallel processing covers the topic of resource management with one or more sessions. Although a very large number of different algorithmic approaches have already been proposed to improve the efficiency of these computer systems, only very few of them, like EASY backfilling, are actually used in real machines. In this talk, we discuss the reasons for this disparity. Then we suggest some rules that future work should consider in order to point out the applicability and the benefit of a new scheduling algorithm. These rules are relevant for research that emphasizes the practical relevance of the presented algorithms. As an example, we show the application of these rules when developing a new method to manage computing resources in the Infrastructure-as-a-Service (IaaS) model of cloud computing.
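
Since the abstract singles out EASY backfilling as one of the few scheduling approaches actually deployed on real machines, a minimal sketch may help illustrate the idea: jobs start strictly in queue order, except that later jobs may be moved forward as long as they do not delay the reservation of the first job that does not fit. The job fields, the flat node model, and the function below are illustrative assumptions, not taken from the talk.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Job:
    name: str
    nodes: int        # number of nodes requested
    walltime: float   # user-supplied runtime estimate

def easy_backfill(queue: List[Job], running: List[Tuple[int, float]],
                  free_nodes: int, now: float) -> List[Job]:
    """Return the jobs that may start at time `now`.
    `running` holds (nodes, expected_end_time) pairs of executing jobs."""
    started = []
    # 1. Start jobs strictly in FIFO order while they fit.
    while queue and queue[0].nodes <= free_nodes:
        job = queue.pop(0)
        free_nodes -= job.nodes
        running.append((job.nodes, now + job.walltime))
        started.append(job)
    if not queue:
        return started
    # 2. The head job does not fit: compute its reservation, i.e. the
    #    earliest time at which enough nodes will have been released.
    head = queue[0]
    available, reservation = free_nodes, now
    for nodes, end in sorted(running, key=lambda r: r[1]):
        available += nodes
        reservation = end
        if available >= head.nodes:
            break
    spare_at_reservation = available - head.nodes
    # 3. Backfill: a later job may jump ahead only if it fits now and either
    #    finishes before the reservation or leaves the reserved nodes untouched.
    for job in list(queue[1:]):
        fits_now = job.nodes <= free_nodes
        ends_in_time = now + job.walltime <= reservation
        fits_beside_head = job.nodes <= spare_at_reservation
        if fits_now and (ends_in_time or fits_beside_head):
            queue.remove(job)
            free_nodes -= job.nodes
            if not ends_in_time:
                spare_at_reservation -= job.nodes
            running.append((job.nodes, now + job.walltime))
            started.append(job)
    return started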

Uwe Schwiegelshohn received the Diploma and Ph.D. degrees in Electrical Engineering from TU Munich in 1984 and 1988, respectively. He was with the Computer Science department of the IBM T.J. Watson Research Center from 1988 to 1994 before becoming a full Professor at TU Dortmund University. In 2008 he was appointed vice rector for finance of this university. Also in 2008 he became managing director of the government-sponsored D-Grid corporation, which coordinates the Grid research effort in Germany. From 2002 to 2012 he was an organizer of the Workshop on Job Scheduling Strategies for Parallel Processing. In addition, he has been chairman and member of the program committees of various other conferences and workshops. His present research interests are scheduling problems, resource management for large computer systems, and virtual research environments.
 

Thomas A. DeFanti
Research Scientist, Qualcomm Institute
University of California, San Diego
USA
Title:
The advent of tiled ultra-narrow-bezel commercial signage displays a few years ago dramatically altered the way information is shown at large scale. These video walls have become ubiquitous in advertising and in TV newsrooms, although there they are employed as wall-sized, very bright HDTVs, not for the display of big data. However, tiled video walls used as a means to visualize big data coming over big networks have become integral to scientific communication, artistic performances, and exhibitions at universities. International video wall collaborations have been ongoing for several years. At UCSD, both 2D and stereo 3D walls are in use, displaying big data at up to 64 megapixels of resolution, as are the new generation of 4K UHD (Ultra-High-Definition) LCD displays. Specific effort has been invested in optimizing high-speed local and long-distance file serving and collaboration using multiple 10Gb/s and 40Gb/s networks and software tuned to synchronized image sharing (SAGE), extremely high-resolution static and streaming image viewing (MediaCommons), and immersive virtual reality experiences (CalVR), as well as to accurate audio handling with focused SoundBender speaker arrays and advanced echo cancellation. Recent results adapting flash-memory big data technology championed by the San Diego Supercomputer Center to SSD-based "FIONA" PCs, which drive 2D/3D big data displays locally with up to 40Gb/s network interfaces attached to 100Gb/s wide-area networks, will be presented along with their applications. Applications in omics and archaeology are two UCSD examples with great international potential. The latest big displays at UCSD, the WAVE and WAVElet, and the use of emerging UHDTV (4K) panels will also be described in detail.

Thomas A. DeFanti, PhD, is a research scientist at the Qualcomm Institute, a division of the California Institute for Telecommunications and Information Technology, University of California, San Diego, and a distinguished professor emeritus of Computer Science at the University of Illinois at Chicago (UIC). He is principal investigator of the NSF IRNC Program TransLight/StarLight project. He is recipient of the 1988 ACM Outstanding Contribution Award and was appointed an ACM Fellow in 1994. He shares recognition with fellow UIC professor emeritus Daniel J. Sandin for conceiving the CAVE virtual reality theater in 1991.
 

Sergio Nesmachnow
Universidad de la República
Multidisciplinary Center for High Performance Computing
Uruguay
Title:
Metaheuristics are high-level soft computing strategies that define algorithmic frameworks and techniques able to find approximate solutions to search, optimization, and machine learning problems. They are highly valuable techniques that allow researchers and practitioners to meet realistic resolution times in many fields of application, ranging from informatics (combinatorial optimization, bioinformatics, software engineering, etc.) to industry and commerce (logistics, telecommunications, engineering, economics, etc.). Parallel models for metaheuristics have been conceived to enhance and speed up the search. By splitting the search workload across several computing elements, parallel metaheuristics allow high-quality results to be reached in a reasonable execution time, even for hard-to-solve optimization problems. This talk introduces the main concepts of metaheuristics as problem solvers and provides a general view of the field of parallel implementations of metaheuristics, including implementation on new computing devices and supercomputing infrastructures, as well as the main lines of application to real-world problems.
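
To make the idea of splitting the search workload concrete, here is a minimal sketch of one simple parallel model: an independent multi-start search run by several worker processes. The bit-flip hill climber, the toy objective, and all parameters are illustrative assumptions, not the techniques covered in the talk.

import random
from multiprocessing import Pool

def objective(x):
    # Toy objective: maximize the number of ones in a bit string.
    return sum(x)

def hill_climb(args):
    # One independent search: random start, single-bit-flip improvements.
    seed, length, iterations = args
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(length)]
    best_fit = objective(best)
    for _ in range(iterations):
        candidate = best[:]
        i = rng.randrange(length)
        candidate[i] = 1 - candidate[i]      # flip one random bit
        fit = objective(candidate)
        if fit >= best_fit:
            best, best_fit = candidate, fit
    return best_fit, best

if __name__ == "__main__":
    workers = 4
    # Each worker explores the search space independently; the best
    # solution found by any of them is reported.
    with Pool(workers) as pool:
        results = pool.map(hill_climb, [(s, 64, 5000) for s in range(workers)])
    best_fit, best = max(results)
    print("best fitness:", best_fit)

More elaborate parallel models (for example, island or master-slave evolutionary algorithms) add cooperation between the workers, typically by periodically exchanging good solutions.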

Sergio Nesmachnow is a Full Time Professor at Universidad de la República, Uruguay, with several teaching and research duties. He is a researcher at the National Research and Innovation Agency (ANII) and the National Program for the Development of Basic Sciences (PEDECIBA), Uruguay. His main research interests are scientific computing, high performance computing, and parallel metaheuristics applied to solving complex real-world problems. He holds a Ph.D. (2010) and an M.Sc. (2004) in Computer Science, and a degree in Engineering (2000) from Universidad de la República, Uruguay. He has published over 90 papers in international journals and conference proceedings. Currently, he works as Director of the Multidisciplinary Center for High Performance Computing (Universidad de la República, Uruguay) and as Editor-in-Chief of the International Journal of Metaheuristics, and he is also a Guest Editor for Cluster Computing and The Computer Journal. He also participates as a speaker and member of several technical program committees of international conferences and is a reviewer for many journals and conferences. E-mail: sergion@fing.edu.uy, Webpage: www.fing.edu.uy/~sergion.
 

Jarek Nabrzyski
University of Notre Dame
Center for Research Computing
USA
Title:
Purdue's HUBzero is growing by leaps and bounds these days, powering everything from simple collaborative websites to compute-intensive portals such as nanoHUB. Does this mean it is a great fit for any science portal? When should HUBzero be used, and when are other tools a better fit? In this talk I will present the experiences of Notre Dame's Center for Research Computing with building science gateways, where both HUBzero and non-HUBzero solutions are used.

Jarek Nabrzyski is the director of the University of Notre Dame's Center for Research Computing. Before coming to Notre Dame, Nabrzyski led Louisiana State University's Center for Computation and Technology, and before that he was the scientific applications department manager at Poznan Supercomputing and Networking Center (PSNC), where he became interested in building science gateways and distributed computing middleware tools. Nabrzyski built a team that developed and supported the GridSphere portlet framework, and later the Vine Toolkit framework; both were used in many grid computing collaborations worldwide. While in Europe, Nabrzyski was involved in more than twenty EC-funded projects, including GridLab, CrossGrid, CoreGrid, GridCoord, QoSCoSGrid, Inteligrid and ACGT. During his last five years at Notre Dame, Nabrzyski has focused on building the Center for Research Computing. The Center, a research enterprise of more than 40 staff and faculty, has been involved in many research projects funded nationally and internationally. Nabrzyski received his M.Sc. and Ph.D. in Computer Science and Engineering from the Poznan University of Technology in Poland. His research interests cover distributed resource management and scheduling, cloud computing, scientific portals, and decision support systems for global health and environmental applications.
 

Zhihui Du
Department of Computer Science and Technology
Tsinghua University
China
Title:
Dr. Zhihui Du is an associate professor in the Department of Computer Science and Technology at Tsinghua University. His principal research interests lie in the field of High Performance Computing, and he has participated in building several cluster systems, including one TOP500 supercomputer. He has published more than 100 academic papers and authored/coauthored 5 books on parallel programming and grid computing. He has served on numerous program committees, is a vice program chair for IPDPS, and is an associate editor for the Journal of Parallel and Distributed Systems and the International Journal of Parallel, Emergent and Distributed Systems.

Cloud computing, a key computing platform that can share resources including infrastructures, platforms, software applications, and even everything else, known as "X as a Service", is shaping the cyber world. But existing cloud computing is limited to addressing problems that can be modeled entirely in the cyber world. There are many more problems in the physical world, and we need a bridge that can connect the cyber world with the physical world seamlessly. Based on the concept of robot as a service in cloud computing, we provide a design for a robot cloud. In this talk, I will give the details of our design of the robot cloud architecture and an application scenario to explain how to provide high-quality service for users and obtain more benefit for the robot cloud center by optimizing the robot services.
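
As a loose illustration of the "robot as a service" idea (the talk's actual architecture and optimization method are not detailed in this abstract), the toy allocator below greedily assigns user requests to the cheapest capable robot that can still meet each request's deadline; every field, name, and pricing rule here is a hypothetical assumption.

from dataclasses import dataclass
from typing import List

@dataclass
class Robot:
    name: str
    skills: set               # capabilities the robot offers
    cost_per_min: float       # operating cost for the center
    busy_until: float = 0.0   # time at which the robot becomes free

@dataclass
class Request:
    user: str
    skill: str                # capability the user needs
    duration: float           # minutes of robot time required
    deadline: float           # latest acceptable completion time
    price: float              # what the user pays for the service

def assign(requests: List[Request], robots: List[Robot], now: float = 0.0):
    """Greedy allocation: serve each request with the cheapest robot that has
    the required skill and can finish before the request's deadline."""
    plan = []
    for req in sorted(requests, key=lambda r: r.deadline):
        candidates = [r for r in robots
                      if req.skill in r.skills
                      and max(now, r.busy_until) + req.duration <= req.deadline]
        if not candidates:
            plan.append((req.user, None, 0.0))     # request rejected
            continue
        robot = min(candidates, key=lambda r: r.cost_per_min)
        robot.busy_until = max(now, robot.busy_until) + req.duration
        profit = req.price - robot.cost_per_min * req.duration
        plan.append((req.user, robot.name, profit))
    return plan

# Example: two robots serving three requests.
robots = [Robot("r1", {"delivery", "cleaning"}, 0.5),
          Robot("r2", {"delivery"}, 0.2)]
requests = [Request("alice", "delivery", 30, 60, 25.0),
            Request("bob", "cleaning", 20, 90, 15.0),
            Request("carol", "delivery", 40, 70, 30.0)]
for user, robot, profit in assign(requests, robots):
    print(user, "->", robot, f"profit={profit:.1f}")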