Parallel Distributed Infrastructure for Minimization of Energy

News

June 18: Live Twitter chat “Less energy consumption in ICT”

Fri, 2015-05-22
In the framework of the EU Sustainable Energy Week 2015, the ICT-Energy coordination action, to which the ParaDIME project belongs, will host a live Twitter chat to raise awareness of energy consumption in ICT. How can dissipated heat be re-used? Can energy be harvested for reuse by computing devices? How can the computing stack be redesigned to use energy in smarter ways? How can we predict the energy consumption of software applications? How can we ensure that the exascale supercomputers of the future will use energy sustainably? We’ll try to find some answers to these questions.
 
The Twitter chat will be hosted by the ICT-Energy Twitter channel (@ICTEnergy_EU) and the hashtag used will be #LessEnergyICT
 
We look forward to your opinions and participation. Let’s meet on June 18 at 12:00 CEST #LessEnergyICT!

Our partner Cloud&Heat awarded the Deutscher Rechenzentrumspreis 2015

Wed, 2015-05-13
Cloud&Heat, which offers powerful, modern cloud technology, has been awarded the Deutscher Rechenzentrumspreis 2015, a German prize that recognizes the best data centers. Cloud&Heat won in the category “Newly built energy-efficient and resource-efficient data centers”.
 
Cloud&Heat offers a green cloud. Its servers are located exclusively in Germany, are powered with green electricity, and don’t require a cooling system. The byproduct heat generated by the servers is repurposed to heat buildings and drinking water. This saves heating energy and significantly lowers CO2 emissions.
 
The Deutscher Rechenzentrumspreis 2015 was presented during a festive gala at the “Future Thinking” data center conference on April 20 in Darmstadt.
 

ParaDIME at EuroSys

Fri, 2015-04-24

The EuroSys conference focuses on systems research and development; this year’s edition was held in Bordeaux.

ParaDIME collaborator Mascha Kurpicz presented a paper titled "Process-level power estimation in VM-based systems", which discusses BitWatts, a middleware for process-level power estimation on physical machines and in virtualized environments.

 

 

Please find the video of the presentation below.

 

Ten minutes with... Dragomir Milojevic, IMEC

Fri, 2015-05-08

Dragomir Milojevic received his Ph.D. in Electrical Engineering from Université Libre de Bruxelles (ULB), Belgium. In 2004 he joined IMEC, where he first worked on multi-processor and Network-on-Chip architectures for low-power multimedia systems. He now works on design methodologies and tools for technology-aware design of 3D integrated circuits as part of the INSITE programme. Dragomir is an associate professor at the Faculty of Applied Sciences, ULB, where he co-founded the Parallel Architectures for Real-Time Systems (PARTS) research group. He has authored or co-authored more than 75 journal and conference articles and has served as a technical program committee member for several conferences in the field.

 

1. Can you tell me a bit about your main research interests? What led you to work in this field?
My current research focuses mainly on the design enablement of future integrated circuits using both advanced device and advanced packaging technologies. For advanced packaging of circuits we are looking into die- and wafer-level stacking using 3D integration. The objective is to provide the means for optimal system design with a given integration technology.

 

At IMEC we develop process technologies to further enable the benefits of scaling in the microelectronics industry. The recent change in the game (the scaling wall) has forced us to look beyond simple CMOS scaling, a model that has run for the past 50 years but will eventually come to an end. That time is about to come, and we need to find new solutions to sustain the extraordinary pace at which the microelectronics industry has evolved over the past years. We believe that this is still possible (at least for mid-term developments) if we carefully design systems by co-optimizing the process technology and the system design.

 

2. What areas are you concentrating on within the ParaDIME project?
Two paths lie before us. The first involves the use of advanced devices (technologies that manufacture transistors with physical dimensions below 14 nm). We are researching how these devices could be used at non-nominal operating points to trade off the power dissipation of the circuit against the accuracy of computation.
 
The second path follows advanced packaging and how this technology could be used to build heterogeneous systems that save power by matching the process technology used to manufacture the ICs to the computation needs. The aim is to combine, within the same circuit, high-performance but high-power CPUs with low-power but lower-performance CPUs.
 
3. Why is it important for computers to be more energy efficient? What are the major technical challenges which need to be overcome to achieve this?
The overall increase in the computation needs of our society is tremendous, and it isn’t likely to change given the extraordinary progress computation provides (despite some computational futilities we witness). On the other hand, energy resources are becoming scarce. Hence, we need to figure out how to deliver orders of magnitude more computing power while requiring less energy to actually perform these computations.
 
4. What, for you, are the key technical challenges which need to be tackled in order to achieve more energy-efficient computing systems?
I strongly believe that a lot of progress could be made through much better co-design between the technology, the system and the software. The complexity of systems, at both the software and hardware levels, has led to the adoption of the "divide and conquer" approach: the problem is broken into many smaller sub-problems, each manageable at its own scale. The problem with such an approach is that it necessarily introduces sub-optimality, so this is a problem of its own that needs to be solved.
 
5. Is it possible to deliver genuine energy savings while achieving optimum performance?
Definitely! But I would put it differently: by sacrificing a little performance, important energy savings can be achieved. This can only be the result of a very careful trade-off made across all levels of electronic system design: device, system and application.
 
6. Why is the ParaDIME project important? What do you think the most important results will be for society in general?
The ParaDIME project is important because we need, and we will need, more and more computation in the future. And the project is addressing this problem in an original way. 
 
Many of the contributions made at one specific level can be applied to other domains as well. Advances in high-performance computing move mobile computing forward, and vice versa.
 
7. What are your predictions regarding the future of information and technology systems, especially regarding energy consumption and innovative architectures?
In the mid-term, scaling will continue and we will reach nodes in the range of a few nanometres. The technology will become extremely expensive but, if carefully used, it could still be cost-effective for some time. We will also have to become more rational and scale only the parts of the systems that need to scale. This could be done using heterogeneous technology integration, and this is where 3D could help enormously.
 
In the long term we need a serious paradigm change. CMOS technology will stop scaling at some point; there is no doubt about that (no man can run 100 m in 5 seconds). We hence need another technology base: optical, quantum computing, or who knows what. This is why this period is extremely interesting for people working in this field!
 

ParaDIME at the 23rd IEEE International Symposium on Field-Programmable Custom Computing Machines

Thu, 2015-05-07
Researcher Oriol Arcas presented the ParaDIME poster "Heterogeneous Platform to Accelerate Compute Intensive Applications" at the 23rd IEEE International Symposium on Field-Programmable Custom Computing Machines which took place from the 3rd to the 5th of May in Vancouver (Canada).
 
This symposium is the original and premier forum for presenting and discussing new research related to computing that exploits the unique features and capabilities of FPGAs and other reconfigurable hardware. Over the past two decades, FCCM has been the place to present papers on architectures, tools, and programming models for field-programmable custom computing machines as well as applications that use such systems.
 
The work presented in Vancouver was a novel heterogeneous acceleration platform for compute-intensive applications such as face recognition. Such a platform not only can greatly improve performance, but it can also reduce energy consumption. This was made possible by interconnecting two advanced system-on-chip accelerators: an NVIDIA Tegra platform, which includes a CPU and a GPU, and a Xilinx Zynq platform, which combines a CPU and an FPGA. All four components accelerated different parts of the algorithm, achieving better results than any component alone.

Ten minutes with... Oscar Palomar, Barcelona Supercomputing Center

Thu, 2015-04-16

Oscar Palomar is a senior researcher in the Computer Architecture for Parallel Paradigms group at Barcelona Supercomputing Center (BSC). His research interests relate to vector and low-power computer architectures. In the ParaDIME project, he works closely with fellow BSC researchers Santhosh Rethinagiri and Ruben Titos, while the principal investigators are BSC’s Adrián Cristal and Osman Ünsal.

 

1. What are your research interests? What do you most enjoy researching?
My research is mainly in two related areas of computer architecture: vector and low-power architectures. Vector architectures have been around for a long time, but we are looking at them from a new perspective and for new types of application, such as databases. In scalar architectures, used in conventional processors, each instruction defines a single operation, meaning that if you have to add two arrays of numbers you have to add each pair of values sequentially, with the add instruction in a loop. Vector architectures allow the two whole arrays to be added using only one instruction, which is more efficient for several reasons, for example that the instruction only has to be fetched, and the processor prepared for the operation, once.
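To make the contrast concrete, here is a minimal C sketch of the same array addition (an illustration only: it uses Intel's short-vector SSE intrinsics, with hypothetical helper names, rather than the long-vector machines discussed in the interview). The scalar version issues one add per element; the vector version covers four elements per instruction.

```c
#include <stdio.h>
#include <immintrin.h>   /* SSE intrinsics; compile for x86 with SSE support */

/* One add instruction per element pair, as on a scalar architecture. */
static void add_scalar(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

/* One instruction covers several elements at once; 128-bit SSE handles
   4 floats per add, real vector machines use much longer vectors. */
static void add_vector(const float *a, const float *b, float *c, int n) {
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(c + i, _mm_add_ps(va, vb));
    }
    for (; i < n; i++)               /* scalar tail for leftover elements */
        c[i] = a[i] + b[i];
}

int main(void) {
    float a[8] = {1,2,3,4,5,6,7,8}, b[8] = {8,7,6,5,4,3,2,1}, c[8], d[8];
    add_scalar(a, b, c, 8);
    add_vector(a, b, d, 8);
    printf("%g %g\n", c[0], d[7]);   /* both paths produce 9 for every element */
    return 0;
}
```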
 
When vector architectures were dominant in supercomputing, the most important constraint affecting their design was not power; computers were built to run as fast as possible, as cheaply as possible. Today, technology trends have made power a key issue. Vector architectures require slightly more power, but the operations run much faster, so they represent a more energy-efficient computer design when vector operations are common in the workload. We’re now seeing a return to vectors, with some designs, such as that of the Intel Xeon Phi, approaching vector architectures in the instructions they offer, although to my mind these could be made more efficient by using a vector implementation as well as vector instructions.
 
2. What areas are you concentrating on within the ParaDIME project?
Within the ParaDIME project, we have published one paper on vectors, but I’ve mostly been working closely with BSC researchers Santhosh Rethinagiri and Ruben Titos on heterogeneous and multi-core architectures. You can find out more about Santhosh’s work in the interview with him on this website. Ruben has been researching how to make inter-core communication more efficient. In a multicore processor (a chip with several processing units), there are two main approaches to communicating and exchanging data. The first is shared memory, where all the cores in the processor access a single memory address space; in the second, each core accesses its own private memory address space and uses message passing to send data to and receive data from other cores.
 
One of the assumptions of the ParaDIME project is that message passing is more efficient than shared memory; however, most chip manufacturers implement shared-memory architectures, and this situation looks likely to continue for multiple reasons. An important one is that most applications use shared memory. Ruben is therefore looking at techniques to improve the efficiency of message passing on shared-memory architectures and has proposed a way of avoiding redundant copies of data. This has two benefits: it improves performance and reduces energy consumption, as moving data consumes a lot of energy.
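As a purely illustrative sketch of how a redundant copy can be avoided on a shared-memory machine (my own minimal pthreads example, not Ruben Titos' actual proposal), the "message" below is delivered by handing the receiver a pointer into the shared address space, so the payload itself is never copied.

```c
/* Compile with: cc -pthread zero_copy.c */
#include <pthread.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  ready;
    const double   *payload;   /* pointer handed over; the data is not copied */
    int             has_msg;
} mailbox_t;

static mailbox_t mbox = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, NULL, 0 };
static double big_buffer[1024];          /* lives in the shared address space */

static void send_msg(const double *data) {
    pthread_mutex_lock(&mbox.lock);
    mbox.payload = data;                 /* ownership transfer, no memcpy     */
    mbox.has_msg = 1;
    pthread_cond_signal(&mbox.ready);
    pthread_mutex_unlock(&mbox.lock);
}

static const double *recv_msg(void) {
    pthread_mutex_lock(&mbox.lock);
    while (!mbox.has_msg)
        pthread_cond_wait(&mbox.ready, &mbox.lock);
    const double *data = mbox.payload;
    mbox.has_msg = 0;
    pthread_mutex_unlock(&mbox.lock);
    return data;
}

static void *consumer(void *arg) {
    (void)arg;
    const double *data = recv_msg();     /* receives a reference, not a copy  */
    printf("first element: %g\n", data[0]);
    return NULL;
}

int main(void) {
    pthread_t t;
    big_buffer[0] = 42.0;
    pthread_create(&t, NULL, consumer, NULL);
    send_msg(big_buffer);
    pthread_join(t, NULL);
    return 0;
}
```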
 
3. How did you come to be a computer-science researcher? Have you always enjoyed computer science?
I suppose I first got interested in computer science when my parents bought a Spectrum computer when I was a kid, which I started to program and have fun with. At school I always liked science, particularly physics and maths, although I only remember having one programming class and I don’t think I got much out of it. I also remember a philosophy teacher who told us that if we didn’t understand his logic class, we should forget about studying computer science. We didn’t really understand his class – although I think that had more to do with his teaching than anything else – but I think this might have actually motivated me more. 
 
I went on to study computer science at Barcelona Tech (Universitat Politècnica de Catalunya) and realised that the topic which interested me most was computer architectures. When doing my final project on computer architectures a professor suggested that I do a PhD, which I went on to do at the same university. 
 
Things might have changed since I was at university, but one thing I felt was missing from the course at that time was a focus on power. I think it’s also really important for computer scientists to learn about different areas: programmers need some architecture awareness, for example, and vice versa.
 
4. Why is it important for computers to be more energy efficient? What are the major technical challenges which need to be overcome to achieve this?
Obviously the less energy computers use, the lower the energy costs, especially over the long term. Every time you switch on your computer you’re using energy, so if it were more energy efficient, using it over the next three years would yield three years’ worth of energy savings.
 
For mobile devices, energy is the most important constraint, due to the need to reduce the number of times you have to charge the battery. Batteries are crucial, in fact – we need batteries which last longer and are faster to charge, and/or have a charging system in the background – but these are out of the hands of computer scientists; all we can do is improve the energy efficiency of devices. 
 
Heterogeneous architectures are definitely the way to go to achieve energy savings: now we need to work out what they will look like and how to make them usable for programmers, so that they don’t need to have in-depth knowledge of hardware to program them. At the moment, for example, we don’t have enough compiler support – compilers being the programs that transform source code written in a programming language into instructions that can be executed by the computer – for vectors. This means that we have to write low-level code to use the vector instructions directly. Using other accelerators such as GPGPUs or FPGAs is also non-trivial and demanding for the programmer.
 
5. What have you learned from working with other researchers on European projects? Do you think it’s a productive experience, despite cultural and linguistic differences?
Working with researchers from other institutions has helped me get perspective on where our area of research lies in the hardware/software development chain: for the researchers at Neuchâtel, for example, BSC works at the low-level end, whereas for IMEC we are more high level. Also, as we’re experimenting with new ideas which don’t currently exist on any chips, at BSC we use simulations to try out our ideas, whereas at Neuchâtel and Technische Universität Dresden they are using real hardware. This means that our timescales and research methods are significantly different (simulation is thousands of times slower), and that we have to work carefully together to ensure that the results are meaningful.
 
6. How is ParaDIME different from other projects focusing on energy-efficient computing, such as Mont-Blanc? What would a sequel to the ParaDIME project look like?
Like Mont-Blanc, ParaDIME is also looking at small, ARM-based, energy-efficient cores, although we are looking at more heterogeneous architectures. However, Mont-Blanc aims to build a supercomputer prototype, whereas ParaDIME’s research is focused more on data centres.
 
ParaDIME uses the Scala and Akka programming models, which are not intended for supercomputing. These are examples of actor models; that is, inherently concurrent models which assume that everything is an actor that can make local decisions, create more actors, send messages to other actors, and determine how to respond to the next message received. The next steps which ParaDIME could take would be to integrate the different elements by getting an actor model to use efficient support for message passing and to use heterogeneous accelerators to improve the implementation of the model.
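For readers unfamiliar with the model, the toy C sketch below (hypothetical code for illustration only; ParaDIME itself uses Scala and Akka) shows the core idea: an actor keeps purely local state and decides how to react to each message it receives, one at a time.

```c
#include <stdio.h>

/* A message and an actor: the actor owns local state and a behaviour
   function that determines how it responds to the next message. */
typedef struct { int kind; int value; } msg_t;            /* 0 = ADD, 1 = REPORT */
typedef struct actor { long state; void (*behave)(struct actor *, msg_t); } actor_t;

static void counter_behaviour(actor_t *self, msg_t m) {
    if (m.kind == 0) self->state += m.value;               /* update local state */
    else             printf("count = %ld\n", self->state); /* report the result  */
}

int main(void) {
    actor_t counter = { 0, counter_behaviour };
    msg_t mailbox[] = { {0, 2}, {0, 3}, {1, 0} };          /* messages sent to it */

    for (int i = 0; i < 3; i++)      /* the actor processes one message at a time */
        counter.behave(&counter, mailbox[i]);
    return 0;
}
```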
 
7. What do you think that BSC, and perhaps even Catalonia more generally, bring to this area of research?
There is a tradition of researching architectures, and specifically vector architectures, at BSC and Barcelona Tech. BSC Director Mateo Valero has published highly influential papers on vector architectures, so I think that when people are interested in vector architectures, they consider BSC a reference. As for Catalonia, I think there’s a tradition of critical thinking (critical perhaps being the operative word) which is useful when it comes to research.
 
8. What are your predictions regarding the future of information and technology systems, especially regarding energy consumption and innovative architectures?
I think it’s dangerous to start making predictions, but I think we can safely say there will be a lot more connected devices and many of these will be working more autonomously. For that to work, and for smart cities to be really feasible, we will need large-scale data centres to process the data. We will also need very energy-efficient, small devices to process as much data as possible locally for things such as smart traffic distribution. This means decisions made locally but with global processing; for both of these areas, energy efficiency is of key importance. 

ParaDIME at COOL Chips XVIII

Tue, 2015-04-14

Researcher Santhosh Kumar Rethinagiri presented "An Energy Efficient Hybrid FPGA-GPU based Embedded Platform to Accelerate Face Recognition Application" in the session "Object recognition techniques" at the IEEE Symposium on Low-Power and High-Speed Chips (COOL Chips XVIII) in Yokohama, Japan. Rethinagiri asserts that "heterogeneous computing is the way to tackle dark silicon problem".

GreenDays@Toulouse

Mon, 2015-03-16


 

ParaDIME collaborator Mascha Kurpicz from the University of Neuchâtel presented her current research, "Power characterization of servers in heterogeneous cloud environments", at GreenDays in Toulouse.

GreenDays is a regular event of the French-speaking Green IT community and took place this year in Toulouse.

 

Call for Papers: The 2nd ICT-Energy International Doctoral Symposium (ICT-EIDS)

Tue, 2015-03-31

The 2nd ICT-Energy International Doctoral Symposium

September 16, Bristol, UK

 

Call for Papers:

The doctoral symposium is part of the ICT-Energy workshop (2015). It is a great opportunity for PhD students to present their thesis work to a broad audience in the ICT community from both industry and academia. The forum may also help students establish contacts for entering research related to the minimization of energy across the various layers of the computing stack. In addition, representatives from industry and academia get a glimpse of the state of the art in the field of energy minimization. Topics of interest include, but are not limited to:

- Power- and thermal-aware algorithms, software and hardware

- Low-power electronics and systems

- Power-efficient multi/many-core chip design

- Sensing and monitoring

- Power and thermal behavior and control

- Data center optimization

- Smart grid and microgrids

- Power-efficient delivery and cooling

- Reliability, life-cycle analysis of IT equipment

- Renewable energy models and prediction

- Matching energy supply and demand

- Smart transportation and electric vehicles

- Smart buildings and urban computing

- Energy harvesting, storage, and recycling

 

Eligibility

Two classes of candidates are eligible: PhD students who are in at least the second year of their thesis work or close to finishing, and post-doctoral candidates.

 

Benefits

  • A presentation (25 mins) at the Doctoral Forum
  • Contacts to professionals from industry and academia
  • Possibility to publish the extended abstract in the ICT-Energy newsletters and website

Submission

Submissions need to contain

  • The full contact address, with affiliation, phone, e-mail
  • A two-page extended abstract (PDF) of no more than 2,400 words describing the novelties and advantages of the thesis work. The abstract should also include the author's name and affiliation. Figures may be included as long as the two-page limit is not exceeded.
  • The ICT-Energy newsletter formatting instructions and template can be downloaded from the link at the bottom of this page.

Important dates

 

- Submission deadline: July 24th, 2015 (by email) (EXTENDED)

- Notification of acceptance: Aug 3rd, 2015 (by email)

- Presentation at PhD Forum: September 16th, 2015

 

Submit this material to the email addresses given below:

 Contacts

Dr. Adrian Cristal

Nexus I - Planta 3

C/ GRAN CAPITA, 2-4

BARCELONA

08034

Email: adrian.cristal@bsc.es

 

Dr. Santhosh Kumar Rethinagiri

Nexus I - Planta 3

C/ GRAN CAPITA, 2-4

BARCELONA

08034

E-mail: santhosh.rethinagiri@bsc.es

 

 

Call for Posters:

You are also invited to submit proposals for poster sessions by e-mail, to Santhosh Kumar Rethinagiri, Poster Chair: santhosh.rethinagiri@bsc.es

Extended abstract submissions should be ONE page, formatted using the proceedings template provided on the webpage. They are refereed primarily on the basis of their relevance to the conference. Accepted abstracts will be published in the ICT-Energy newsletters. All participants in this track will have an opportunity to present a poster and give a short talk. Submission of a paper to the track signifies an agreement to have one author present the work at the conference.

 

Author Schedule (Poster):

 

  • AUG 15th - Poster Abstract Submission (by e-mail)
  • AUG 20th - Poster Abstract Submission (by e-mail)

 

Chairs

Adrian Cristal (Barcelona Supercomputing Center)

 

Poster Chair
Santhosh Kumar Rethinagiri (Barcelona Supercomputing Center)
 

Committee Members

Osman Unsal (Barcelona Supercomputing Center)

Kerstin Eder (University of Bristol)

Oscar Palomar (Barcelona Supercomputing Center)

John Gallagher (Roskilde University)

Javier Arias (Barcelona Supercomputing Center)

Luca Gammaitoni (NIPS - University of Perugia)

Hossein Mamaghanian (EPFL)

Douglas J Paul (University of Glasgow)

Giorgos Fagas (Tyndall National Institute - University College Cork)

Ten minutes with... Santhosh Rethinagiri, Barcelona Supercomputing Center

Fri, 2015-03-13

Santhosh Rethinagiri

Santhosh Rethinagiri is a senior researcher with the Microsoft research group at Barcelona Supercomputing Center (BSC). His research interests include the minimisation of energy in data centers, power reduction for supercomputers built from mobile computing chips, and FPGA-based acceleration for databases. Fellow BSC researcher Oscar Palomar also participates in the ParaDIME project, while the principal investigators are BSC’s Adrián Cristal and Osman Ünsal.

 

What’s your research background? How did you come to be working in this field?

I did my B.Eng in electronics and instrumentation before specialising in embedded systems for my MS in electrical engineering. After that, I spent time working for the electronic system-level (ESL) company Synopsys, where I got to know about systems research, platform architecture in general and the products they offer in particular. My PhD, which I did at Inria in France, focused on developing tools to estimate power for applications and systems used by various companies, including Thales, Inpixel, Inria and STMicroelectronics.

What are your current research interests?

My work is mostly in power estimation and optimisation across every layer of the computing system, from hardware to applications. I undertake heterogeneous prototyping with field-programmable gate arrays (FPGAs), central processing units (CPUs) and graphics processing units (GPUs). This involves running real applications and trying to optimise them for different devices, aiming to use the advantages of the three different devices to improve energy efficiency and accelerate applications. I am also working at the device level on hybrid architectures using complementary metal-oxide-semiconductor (CMOS) technology.

In terms of software, I am working on annotated power saving. A piece of software will specify that an application should be executed at a specific frequency, but some applications don’t require that much power, so you can use automatic workload specification to distribute power intelligently.
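As a rough sketch of what such an annotation could boil down to on Linux (my own simplified example, not ParaDIME's actual mechanism; the sysfs path, frequency values and required root privileges are assumptions about the target machine), a non-critical phase could run with the core frequency capped and the cap lifted afterwards:

```c
#include <stdio.h>

/* Write a frequency cap (in kHz) to the Linux cpufreq sysfs interface. */
static int write_khz(const char *path, long khz) {
    FILE *f = fopen(path, "w");
    if (!f) return -1;                /* typically needs root privileges */
    fprintf(f, "%ld\n", khz);
    fclose(f);
    return 0;
}

int main(void) {
    const char *cap = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq";

    write_khz(cap, 800000);           /* annotation: this phase tolerates 0.8 GHz */
    /* ... run the low-priority, low-power part of the application here ... */
    write_khz(cap, 2400000);          /* restore the assumed nominal 2.4 GHz cap  */
    return 0;
}
```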

Why should we be working towards more energy-efficient computing systems? What can we do to make them more efficient?

Currently, data centres are not operating fully: up to 90% of their processors may be idle at any one time. This leads to enormous energy costs.

One way to reduce the power consumption is to replace powerful processors with embedded ones, gaining a productive trade-off of energy versus performance. The Mont-Blanc project at BSC, for example, is creating supercomputer prototypes where high-performance processors are replaced with ARM-based chips. Where software is concerned, annotating the sections which are critical and therefore need high performance and those which need less power can help reduce the power consumption, as mentioned above.  

What, for you, are the main technical challenges which should be tackled in order to achieve these?

For me the key thing is that every part of the design of computing systems needs to be addressed: improved architecture should be complemented by software and device design. As no one person can master every element, this means that engineers from different areas need to work together, and this is what the ParaDIME project is all about.

On what areas are you concentrating in the ParaDIME project?

Within the project I concentrate on architecture. My first task was to develop power models for different kinds of processor and to evaluate these models automatically. Since then, I’ve been building heterogeneous platforms consisting of the three different processor types listed above, both high-performance and embedded systems. These will define how future architectures are designed.

We are also working on a proposal for a hybrid device combining two different types of processor, such as CMOS and tunnel field-effect transistors. We’ve been modelling designs with the help of a Belgian research institute, the Interuniversitair Micro-Electronica Centrum (IMEC).

We’re lucky to have a cooperative working environment in the ParaDIME project, as well as strong project management, which means that everything is well coordinated.

What do you think BSC’s unique contribution to the ParaDIME project is?

As the coordinator, BSC is the heart of the project, both in terms of project coordination and the new research it is producing on different topics. The fact that there is a supercomputer on site at BSC means that researchers can experiment with their own prototypes. There is also expertise in every field at the centre, and researchers are very approachable. We swap advice and share code with researchers from the Mont-Blanc project – we’ve asked for their input about their prototypes, for example, which has influenced our development of a data centre prototype. We also work with the programming models group: their programming models can be used for FPGA and GPU communication.

Why is the ParaDIME project important? What do you think the most important results will be for society in general?

ParaDIME will provide new roadmaps for data-centre systems, covering programming models, runtime models and new devices (both in the near and more distant future). Testing with prototypes gives us the assurance that these will work. The project won’t change the face of data centres, but at least it will provide one of the solutions contributing to energy-aware computing.

Why is it important for ParaDIME to participate in the ICT-Energy project?

ICT-Energy is a consortium of projects related to energy, ranging from basic physics all the way up to data centres and modern supercomputers. ParaDIME is the only one working on all aspects from the device to the data-centre level, so it acts as a kind of middleman for the other projects, which in turn give their input on how to utilise small architectures.

Green IT: A computer as a furnace – NZZ article on ParaDIME and partners

Mon, 2015-03-02

Why should we put a lot of effort into air conditioning data centers when the waste heat is needed elsewhere? Responding to this question, a recent article in the Swiss newspaper Neue Zürcher Zeitung (NZZ) points to the potential energy savings that can be made by creating distributed data centers that respond to the heating demands of private homes, apartments and offices, instead of building immense server farms with their well-known disadvantages.

The article describes the efforts of scientific programs and companies to find a solution for this environmentally friendly approach. ParaDIME consortium members Cloud & Heat Technologies, Université de Neuchâtel and Technische Universität Dresden are mentioned as having some of the most advanced technical networks. You can read the full article (in German) on the NZZ website.

Cloud & Heat is currently working on the intelligent scheduling of cloud jobs based on the heat demands of its already distributed hardware.

 

Exploring energy-efficient transactional memory at HiPEAC 2015

Thu, 2015-02-19

Osman Ünsal presenting a keynote on energy-efficient transactional memory during HiPEAC 2015

 

Energy-efficient transactional memory was the topic of a presentation by ParaDIME researcher Osman Ünsal (Barcelona Supercomputing Center - BSC) at the final Euro-TM workshop, co-located with the 2015 HiPEAC conference in Amsterdam on 19 January 2015.

Transactional memory offers an alternative programming model which may simplify the development and testing of concurrent programs, enhance code reliability and boost productivity. Academic research in the field has recently been followed by interest in the commercial sector: processors with transactional memory support have recently become available (IBM BG/Q, IBM Z-series, IBM POWER8, Intel Haswell), making it possible to measure and quantify energy-related aspects.
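To illustrate the programming model itself (not the energy-efficient designs discussed in the talk), here is a hedged C sketch of a transactional increment using Intel's RTM intrinsics, available for example on Haswell and compiled with -mrtm; the spinlock fallback is a deliberately simplified pattern.

```c
/* Compile with: cc -mrtm tm_sketch.c */
#include <immintrin.h>
#include <stdio.h>

static volatile int fallback_lock;   /* simple test-and-set spinlock */
static long counter;

static void lock(void)   { while (__sync_lock_test_and_set(&fallback_lock, 1)) ; }
static void unlock(void) { __sync_lock_release(&fallback_lock); }

/* Transactionally increment the shared counter, falling back to the lock
   if the hardware transaction aborts. Reading the lock inside the
   transaction keeps the two paths mutually exclusive. */
static void increment(void) {
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        if (fallback_lock) _xabort(0);   /* someone is in the locked path */
        counter++;
        _xend();                         /* commit atomically             */
    } else {                             /* abort path: use the lock      */
        lock();
        counter++;
        unlock();
    }
}

int main(void) {
    for (int i = 0; i < 1000; i++)
        increment();
    printf("counter = %ld\n", counter);
    return 0;
}
```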

The presentation provided the research background before exploring energy-efficient hardware transactional memory and research on error detection and transactional memory for energy-efficient computing below safe operation margins.  

In addition, along with BSC researchers Adrián Cristal and Gülay Yalçın and other experts in the field from across Europe, Dr Ünsal has also contributed to the final Euro-TM publication, Transactional Memory. Foundations, Algorithms, Tools, and Applications, on the topic of reliability and transactional memory.

The keynote is available for download below.

For further information on the Transactional Memory publication, visit the Euro-TM website.

Ten minutes with… Christof Fetzer, Technische Universität Dresden

Wed, 2015-01-14

Christof Fetzer

Christof Fetzer holds an endowed chair (Heinz-Nixdorf endowment) in Systems Engineering in the Computer Science Department at Technische Universität Dresden (TUD), as well as being chair of the Distributed Systems Engineering International Master’s Programme. His PhD students Thomas Knauth and Lenar Yazdanov also work on the ParaDIME project.

 

Can you tell me a bit about your main research interests? What led you to work in this field?

My research interests include cloud computing, dependability, security and energy efficiency, partly because I supervise several PhD students doing different things. I take on new research problems which interest me personally – so a few years ago I thought there were interesting problems in cloud computing. For example, as a cloud customer, how can I trust that confidentiality is ensured for my data and its computation? How do we ensure the integrity of the data and its availability?

Cloud computing has some great advantages with regard to energy efficiency: because the infrastructure is shared, it can be provisioned for the combined peak load of many customers. Computers are more fully utilised and are therefore more energy efficient.

There are varying daily patterns, with some periods experiencing high loads and others low loads. Most data centres don’t switch off their machines; as a result, a 2012 study by the New York Times found that data centres can waste 90% of the electricity they take from the grid, as they use only a small percentage of the electricity powering their servers to perform computations and the rest to keep servers idling – and servers are idle for about 90% of the time. You could therefore potentially achieve energy savings in the region of an order of magnitude if the servers were utilised 80% of the time. One way of doing this would be to consolidate the load, moving all the computation onto a few machines and switching off the other machines. This presents significant challenges, as moving applications too often should be avoided.
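A back-of-the-envelope calculation (with assumed, purely illustrative power figures) shows why consolidation pays off: an idle server still draws a large share of its peak power, so spreading a small load across many mostly-idle machines costs far more energy than packing it onto a few busy ones and switching the rest off.

```c
#include <stdio.h>

int main(void) {
    double p_idle = 100.0, p_peak = 200.0;  /* watts per server, assumed numbers */
    int    servers = 10;
    double load = 1.0;                      /* total work = 1 fully busy server  */

    /* Load spread thinly: all 10 servers on, each roughly 10% utilised. */
    double spread = servers * (p_idle + (p_peak - p_idle) * (load / servers));
    /* Load consolidated onto one fully utilised server, the rest switched off. */
    double packed = p_peak;

    printf("spread: %.0f W, consolidated: %.0f W (%.1fx less)\n",
           spread, packed, spread / packed);
    return 0;
}
```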

 

What does Dresden bring in particular to the ParaDIME project?

TUD has two large projects which are complementary to ParaDIME. The first is a cluster which forms part of the German excellence initiative, the Centre for Advancing Electronics. In this cluster we look at resilient computing, and one of the topics we consider is how we can lower the energy consumption of computers to a point where there might be errors in the computation, so that we can try to detect them, correct them and from there aim not to introduce them in the first place. We are also investigating new technology, material and devices, such as carbon nanotubes for integrated circuits, which might provide better energy efficiency.

The other major project is 5G Lab Germany, which researches the next generation of wireless networks, which will replace long-term evolution (LTE) networks. In this project, we are researching edge clouds to see how we can distribute computing so as to reduce latency and communicate and compute within less than a millisecond. Applying this in the area of the tactile internet, for example – providing a low response time and giving users immediate feedback so that they would not notice any latency – would allow the creation of a range of new applications, such as in the domains of health or music.

 

What, for you, are the most compelling reasons why we should create more energy-efficient computing systems?

One reason is the cost of computing, as energy consumption contributes to the total cost of ownership. Another reason is that if computers are more energy efficient you could pack them closer together, thereby achieving a higher compute density in the data centre and reducing the space required for the data centre.

Ecological reasons are also important: the total electricity consumption by the ICT infrastructure is greater than that of India. It therefore makes a lot of sense from an ecological perspective to increase energy efficiency. 

 

What, for you, are the key technical challenges which need to be tackled in order to achieve more energy-efficient computing systems?

One of the ways to save energy in data centres is to switch off some of the machines when they are not needed. However, this poses technical problems, as the machines might not come back up when they are switched on again, so technicians would theoretically need to be available in case of any issues.

Storage is usually attached directly to the computer, as there is higher throughput when you attach solid-state drives directly to the computing nodes. If you switch off the machine, the storage attached to it is lost. We need to find a solution which would allow us to support directly attached storage and still be able to switch off machines.

 

Is it possible to deliver genuine energy savings while achieving optimum performance?

It depends what you mean by optimum performance. As computers running at maximum capacity are more energy efficient, we need to keep them at a high level of utilisation. The most efficient algorithms should also be the most energy efficient, but in parallel computing you often want to reduce the runtime of an application, which you do by parallelisation. However, you almost never get linear increases in speed. This means you pay a price in terms of energy in order to achieve shorter runtimes, so the throughput per server is lower than in the case of sequential programming.

So what we would have to do is to maximise the throughput per server of an application and not minimise the runtime of the application. In so doing I think we can achieve genuine energy savings for batch jobs, but this might not be the case for interactive jobs, where you need to get a response quickly.
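The trade-off can be put into numbers with a simple, entirely illustrative calculation: parallelising a batch job across more servers shortens its runtime, but because the speed-up is sub-linear, the energy spent per job – and hence the throughput per joule – gets worse.

```c
#include <stdio.h>

int main(void) {
    double p_server = 200.0;   /* watts per active server, assumed       */
    double t_seq    = 100.0;   /* sequential runtime in seconds, assumed */

    double e_seq = p_server * t_seq;                   /* 1 server, full runtime   */
    double speedup = 3.0;                              /* on 4 servers, sub-linear */
    double e_par = 4.0 * p_server * (t_seq / speedup); /* 4 servers, shorter run   */

    printf("sequential: %.0f J per job, parallel: %.0f J per job\n", e_seq, e_par);
    return 0;
}
```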

 

What are the main improvements which you would like to see in the computing systems of the future?

What I really want to see is decentralised computing infrastructure that will compute at the edges of the internet. That would have a positive impact on energy efficiency as less data would have to be transferred and therefore the energy consumption of the network would be reduced. If you keep computation local you can achieve much higher energy efficiency in comparison to a centralised system where you have to transfer data across Europe, for example, and back again.

 

What will the lasting impact of the ParaDIME project be?

Edge computing will allow devices to become more ‘intelligent’: with traffic systems, for example, you could offload some of the intelligence of controlling the cars and routing the traffic. To optimise the energy consumption of cars and schedule routes you would need intelligent computational infrastructure which we don’t have today but could have in the future. Cars could therefore be more connected and autonomous and could interact with the system around them, which could lead to more energy-efficient transportation.

ParaDIME at Middleware Poster session

Wed, 2014-12-10


At the Middleware conference, Mascha Kurpicz and Anita Sobe from UNINE presented a poster on "BitWatts: A Process-level Power Monitoring Middleware". This work was done in collaboration with INRIA Lille (in the picture with Maxime Colmant).

ParaDIME member presents workshop paper at CrossCloud Brokers'14

Mon, 2014-12-08


CrossCloud Brokers'14 is a workshop co-located with the Middleware 2014 conference, held in Bordeaux, France, at the beginning of December 2014. The workshop explores the issues around multi-cloud and federated cloud systems. One session targeted decision support, and ParaDIME collaborator Mascha Kurpicz presented a paper titled "Using Power Measurements as a Basis for Workload Placement in Heterogeneous Multi-Cloud Environments", which proposes power consumption as a scheduling decision metric in heterogeneous data centers.
