Parallel Distributed Infrastructure for Minimization of Energy


ParaDIME project on Science Node

Wed, 2016-04-13

 

An article about the ParaDIME project has been published on the Science Node website. Science Node is a free online publication, jointly funded by organizations in the US and Europe.

 

Please find the article by following this link: "A new ParaDIME for energy-efficient computing", or by downloading the PDF file below.


ParaDIME on the Workshop on the Future Energy in ICT Research Agenda

Mon, 2015-09-21
The Workshop on the Future Energy in ICT Research Agenda, held in Bristol on 15 September, brought together academic and industrial leaders to focus on the challenge of energy consumption in ICT. The main workshop event featured presentations on current challenges and research in low-energy ICT, from fundamental physics to HPC, and provided networking opportunities to discuss where attention should be focused.
 
ParaDIME partner Dr. Jens Struckmeier from Cloud&Heat presented the company's work on setting up a decentralized green data center for cloud computing, introducing the concept and challenges of reusing computational heat to warm houses and apartment complexes. Dr. Struckmeier also pointed out that decentralized data centers allow for shorter response times and enable "real-time" cloud applications required for future technologies (Industry 4.0, autonomous driving, virtual reality, and the tactile internet).
 

 
After the talk there was a lively question-and-answer session, showing the interest in the technology. The data presented by the other speakers (i.e. Prof. Paul Douglas, University of Glasgow, and Boris Grot, University of Edinburgh) support the conclusion that, despite all the very successful efforts to reduce the energy consumption of ICT, overall consumption will continue to grow, because the extreme increase in the volume of data and data processing outpaces the efficiency gains. The concept of reusing the energy is therefore complementary to all energy-saving efforts, not contradictory.
 
Cloud&Heat provides a distributed Infrastructure-as-a-Service cloud, powered by OpenStack, based on servers located in eco-friendly residential or commercial buildings. The set-up is straightforward: servers, in self-contained fireproof cabinets, are installed in the basements of private and commercial buildings. The cloud-heaters are connected via broadband fibre internet connections. To allay concerns about housing the servers in this new way, data is decentralized, triple-replicated and encrypted. The company offers single-rack installations with normal broadband connections and larger installations with connections to multiple backbones for redundancy. Between one and six cabinets (a “data safe”) form an individual OpenStack deployment, and they all share the same authentication through one OpenStack Identity (code-named Keystone) service.
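As an illustration of the shared-identity design, the sketch below uses the standard OpenStack Python client libraries to authenticate once against a single Keystone service and then address several cabinet deployments as regions. The endpoint URL, credentials and region names are invented for illustration; they are not Cloud&Heat's actual configuration.

```python
# Hypothetical sketch: one Keystone identity service serving several
# cabinet deployments exposed as OpenStack regions. Endpoint, user and
# region names are made up for illustration.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client as nova_client

# Authenticate once against the shared Keystone service.
auth = v3.Password(
    auth_url="https://identity.example.org:5000/v3",  # shared Keystone
    username="demo",
    password="secret",
    project_name="demo",
    user_domain_id="default",
    project_domain_id="default",
)
sess = session.Session(auth=auth)

# The same credentials then work against any cabinet/region.
for region in ("basement-dresden-1", "basement-dresden-2"):
    nova = nova_client.Client("2", session=sess, region_name=region)
    print(region, [server.name for server in nova.servers.list()])
```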
 
The business model rests on a foundation of energy efficiency, competitive pricing and data protection that meets strict German and European standards. The model is two-pronged: one set of customers gets a cost-effective cloud, while the other saves on heating costs. Cutting server-cooling costs while providing heat for homes yields an energy efficiency that could never be achieved with a classical centralized data center: the solution enables a PUE of 1.06, and the energy is reused for heating houses, which further reduces the CO2 footprint. Cloud&Heat also developed a concept to match server workload with heating demand, introducing “briques” as combined computational and heating units.
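For reference, power usage effectiveness (PUE) is defined as the ratio of total facility energy to the energy consumed by the IT equipment alone, so a PUE of 1.06 means only about 6% overhead on top of the computing itself:

```latex
\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}},
\qquad
\mathrm{PUE} = 1.06 \;\Rightarrow\;
\frac{E_{\text{overhead}}}{E_{\text{IT equipment}}} = 1.06 - 1 = 0.06 .
```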
 
Cloud&Heat (formerly AoTerra) has received several awards for its innovations: the Saxon Environmental Award in 2013, a finalist place in the German Industry Innovation Award, and the German Data Center Prize (Deutscher Rechenzentrumspreis) for energy-efficient data centers in 2015.
 
In this talk, Cloud&Heat gave a brief overview of its hybrid water- and air-cooled micro data center for cloud applications and introduced attendees to the challenges and solutions of matching cloud demand to the heating and warm-water demand of the buildings.

A new paper presented at PPPJ 2015

Mon, 2015-09-14
Sebastian Ertel presented the paper "Ohua: Implicit Dataflow Programming for Concurrent Systems", authored by Sebastian Ertel, Christof Fetzer and Pascal Felber, at PPPJ 2015, held from 8 to 10 September in Melbourne, Florida (USA).
 
The 2015 International Conference on Principles and Practices of Programming on the Java Platform: Virtual Machines, Languages, and Tools (PPPJ’15), the 12th conference in the PPPJ series, provides a forum for researchers, practitioners, and educators to present and discuss novel results on all aspects of programming on the Java platform, including virtual machines, languages, tools, methods, frameworks, libraries, case studies, and experience reports.

ParaDIME at the ICT Energy Summer School

Mon, 2015-07-20
The 6th edition of the NiPS Summer School took place in Fiuggi (Italy) from 7 to 12 July. This year's edition was entitled "ICT-Energy: Energy consumption in future ICT devices" and aimed to teach the basics of the science of efficient ICT through four thematic groups of lectures:
  • Basics of the physics of energy transformations at the micro and nanoscale
  • Introduction to energy harvesting and distributed autonomous mobile devices
  • Software and energy-aware computing
  • High performance computing and systems

ParaDIME researcher Santhosh Rethinagiri presented two of the four "High performance computing and systems" sessions: "Introduction to data-centers" and "Tools and methodologies for energy-aware data-centers".

The presentations can be found on the website of the Summer School. Please visit the ICT-Energy website, Facebook page and Twitter timeline for more information. A Facebook group for the Summer School was also created.

The ParaDIME project is a member of the consortium of the ICT-Energy coordination action.

Participating in the live Twitter chat "Less energy consumption in ICT"

Mon, 2015-06-22

ICT-Energy, a coordination action to which the ParaDIME project belongs, hosted a live Twitter chat on 18 June in the framework of the Sustainable Energy Week and the Micro-Energy Day. The chat could be followed via the hashtag #LessEnergyICT. Below you can read the Storify of the live chat.

During the week, ParaDIME partner Barcelona Supercomputing Center offered visitors to the MareNostrum supercomputer information about micro-energy. Micro-energy refers to energy that is often disregarded as unimportant but actually plays a significant role in our daily lives, for example when you run out of battery on your mobile phone and really need to make that call. The amount of energy involved in this case is very small compared to the energy required to drive a car, but you definitely notice when it's not there.

 

ParaDIME presents a paper about memory management at ISCA Symposium

Mon, 2015-06-15
This week, a ParaDIME paper about memory management will be presented at the 42nd International Symposium on Computer Architecture (ISCA). ISCA is the premier forum for new ideas and experimental results in computer architecture; this year it is being held in Portland, Oregon, from 13 to 17 June.
 
The paper is titled “Redundant Memory Mappings for Fast Access to Large Memories” and was written by Vasileios Karakostas (from BSC/UPC) together with researchers from the University of Wisconsin-Madison, Microsoft Research and BSC. It proposes a hardware/software co-design that leverages ranges of pages to reduce the overhead of virtual memory.
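The core idea of range-based translation can be pictured with a toy model (conceptual only; the paper proposes a hardware/software co-design, and the addresses below are hypothetical): a single entry maps an entire contiguous virtual range to a contiguous physical range, so a hit avoids the multi-level page-table walk.

```python
# Toy model of range translation (a conceptual sketch of the idea in
# the paper, not its hardware design); addresses are hypothetical.
# One entry covers a whole contiguous range instead of one entry per
# 4 KB page.
ranges = [
    # (virtual_base, length_in_bytes, physical_base)
    (0x1_0000_0000, 512 * 1024 * 1024, 0x2_0000_0000),  # one 512 MB range
]

def translate(vaddr: int) -> int:
    for vbase, length, pbase in ranges:
        if vbase <= vaddr < vbase + length:
            return pbase + (vaddr - vbase)  # one add on a hit, no page walk
    raise KeyError("miss: fall back to the regular page table")

print(hex(translate(0x1_0000_2345)))  # -> 0x200002345
```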
 
Please feel free to check out our Publications section to find this and other papers and scientific publications.

Ten minutes with... Malte Schneegass, Cloud&Heat

Wed, 2015-06-10
Malte Schneegass received his engineering degree in computer and automation technology from the University of Applied Sciences, Dresden (Germany). In 2012 he joined Cloud&Heat Technologies, a provider of cloud-based computing services whose servers' waste heat is also used to heat buildings and water. In May 2015, Cloud&Heat received the Deutscher Rechenzentrumspreis 2015 award in the category “Newly built energy-efficient and resource-efficient data centers”.
 
1. Could you explain why Cloud&Heat has a different approach to cloud computing services?
Cloud&Heat operates a distributed data center, combining the business of a cloud services provider with that of providing heating for residential and industrial buildings. The cloud servers are installed directly in the properties to be heated, and multiple smaller data centers are connected via the internet to form one virtual data center. Cloud users can profit from this decentralized infrastructure by programming failover mechanisms from one location to another and building reliable, highly available services.
 
Nowadays, our efforts are dedicated to managing and maintaining the distributed data center. We are focused on developing the product, from the server components to the heating system, as well as on connectivity and scalability.
 
On the other hand, we are really happy with the feedback received from the heat market, which has demonstrated strong interest in our product.
 
2. How can scientific research lead to products, or how can it reach industry?
It's not easy to reach industry with research results. Usually there is still a long way to go to make them economically valuable for businesses. In our case, for example, we still need to continue our research and devote a lot of effort to making a product that meets industrial needs. We are constantly improving our product and have been able to establish an innovative new product in the market.
 
3. What led you to work in this field? What do you like about it?
I studied automation of systems and processes. My current work is about automating a highly distributed heating system that uses cloud computing as fuel. Before implementation, conceptual work with models and simulation was necessary. Afterwards, we shaped the plan with a lot of interdisciplinary work across computer science and HVAC engineering/control loops. It is thrilling to see how the different technologies create synergies. I like working on something that makes our daily life smarter and more sustainable at the same time.
 
4. What are the key technical challenges which need to be tackled in order to achieve more energy-efficient computing systems?
With the installation of distributed data centers we have taken a great step forward. The next step is to make them as energy-efficient as possible: this is something we already do in the ParaDIME project by implementing and improving smart-scheduling algorithms in our data centres. However, electronics-based computer and server architectures inevitably lose energy as heat. Either we reuse this energy, or it becomes wasted heat that could have provided free warm water for showers or served heating purposes.
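Heat-aware scheduling of the kind mentioned above can be illustrated with a toy model (entirely hypothetical; the real ParaDIME/Cloud&Heat algorithms are more sophisticated): dispatch each batch job to the site whose heating demand is currently least satisfied, so the waste heat lands where it is useful.

```python
# Toy sketch of heat-aware job placement; a hypothetical model, not the
# actual ParaDIME/Cloud&Heat scheduler. Jobs go to the site with the
# largest unmet heating demand that still has thermal capacity free.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    heat_demand_kw: float  # current heating demand of the building
    server_load_kw: float  # heat already produced by running jobs
    capacity_kw: float     # maximum thermal output of the cabinet

    def unmet_demand(self) -> float:
        return self.heat_demand_kw - self.server_load_kw

def place_job(sites: list[Site], job_power_kw: float) -> Site | None:
    """Pick the site where the job's waste heat is most useful."""
    fits = [s for s in sites
            if s.server_load_kw + job_power_kw <= s.capacity_kw]
    if not fits:
        return None  # no capacity anywhere; queue the job instead
    best = max(fits, key=Site.unmet_demand)
    best.server_load_kw += job_power_kw
    return best

sites = [Site("apartment-block-A", 12.0, 4.0, 10.0),
         Site("office-B", 6.0, 1.0, 8.0)]
print(place_job(sites, 2.0).name)  # -> apartment-block-A
```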

June 18: Live Twitter chat “Less energy consumption in ICT”

Fri, 2015-05-22
In the framework of the EU Sustainable Energy Week 2015, the ICT-Energy coordination action, to which the ParaDIME project belongs, will host a live Twitter chat to raise awareness about energy consumption in ICT. How can dissipated heat be reused? Can energy be harvested for reuse by computing devices? How can the computing stack be redesigned to use energy in smarter ways? How can we predict the energy consumption of software applications? How can we ensure that the exascale supercomputers of the future will use energy sustainably? We’ll try to find some answers to these questions.
 
The Twitter chat will be hosted by the ICT-Energy Twitter channel (@ICTEnergy_EU) and the hashtag used will be #LessEnergyICT
 
We look forward to your opinions and participation. Let’s meet on June 18 at 12:00 CEST. #LessEnergyICT

Our partner Cloud&Heat awarded the Deutscher Rechenzentrumspreis 2015

Wed, 2015-05-13
Cloud&Heat, which offers a powerful and modern cloud technology, has been awarded the Deutscher Rechenzentrumspreis 2015, a German prize intended to recognize the best data centers. Cloud&Heat won in the category “Newly built energy-efficient and resource-efficient data centers”.
 
Cloud&Heat offers a green cloud. The servers are located exclusively in Germany, powered with green electricity, and do not require a dedicated cooling system. The byproduct heat generated by the servers is repurposed to heat buildings and drinking water. This process saves energy for heating and significantly lowers CO2 emissions.
 
The Deutscher Rechenzentrumspreis 2015 was awarded during a festive gala at the “Future Thinking” data center conference on April 20 in Darmstadt.
 

Ten minutes with... Dragomir Milojevic, IMEC

Fri, 2015-05-08

Dragomir Milojevic received his Ph.D. in Electrical Engineering from the Université Libre de Bruxelles (ULB), Belgium. In 2004 he joined IMEC, where he first worked on multi-processor and Network-on-Chip architectures for low-power multimedia systems. Nowadays, he is working on design methodologies and tools for technology-aware design of 3D integrated circuits as part of the INSITE programme. Dragomir is an associate professor at the Faculty of Applied Sciences, ULB, where he co-founded the Parallel Architectures for Real-Time Systems (PARTS) research group. He has authored or co-authored more than 75 journal and conference articles and has served as a technical program committee member for several conferences in the field.

 

1. Can you tell me a bit about your main research interests? What led you to work in this field?
My current research interest focuses mainly on the design enablement of future integrated circuits using both advanced device and packaging technologies. For advanced packaging of circuits we are looking into die- and wafer-level stacking using 3D integration. The objective is to provide the means to enable optimal system design with a given integration technology.

 

At IMEC we develop process technologies to further enable the benefits of scaling in the microelectronics industry. The recent change in the game (the scaling wall) has forced us to look beyond simple CMOS scaling, a model that has run for the past 50 years but will eventually come to an end. That time is approaching, and we need to find new solutions to sustain the extraordinary pace at which the microelectronics industry has evolved over the past years. We believe that this is still possible (at least for mid-term developments) if we carefully design systems by co-optimizing the process technology and the system design.

 

2. What areas are you concentrating on within the ParaDIME project?
Two paths lie before us. The first involves the use of advanced devices (technologies that manufacture transistors with physical dimensions below 14 nm). We are researching how these devices could be used at non-nominal operating points to trade off the power dissipation of the circuit against the accuracy of computation.
 
The second path follows advanced packaging and how this technology could be used to build heterogeneous systems that save power by matching the process technology used to manufacture the ICs to the computation needs. The aim is to combine, within the same circuit, high-performance CPUs with high power dissipation and low-performance CPUs with low power dissipation.
 
3. Why is it important for computers to be more energy efficient? What are the major technical challenges which need to be overcome to achieve this?
The overall increase in the computation needs of our society is tremendous, and it isn't likely to change given the extraordinary progress it enables (despite some computational futilities we witness). On the other hand, energy resources are becoming scarce. Hence, we need to figure out how to deliver orders of magnitude more computation power with less energy to actually perform these computations.
 
4. What, for you, are the key technical challenges which need to be tackled in order to achieve more energy-efficient computing systems?
I strongly believe that a lot of progress could be made through much better co-design between the technology, the system and the software. The complexity of systems, at both the software and the hardware level, has led to the adoption of the "divide and conquer" approach: the problem is broken into many smaller sub-problems that are manageable at their own scale. The problem with such an approach is that it necessarily introduces sub-optimality, and that is a problem of its own that needs to be solved.
 
5. Is it possible to deliver genuine energy savings while achieving optimum performance?
Definitely! But I would put it differently: by sacrificing a little bit of performance, important energy savings can be achieved. This can only result from a very careful trade-off made across all levels of electronic system design: device, system and application.
 
6. Why is the ParaDIME project important? What do you think the most important results will be for society in general?
The ParaDIME project is important because we need, and will continue to need, more and more computation in the future, and the project is addressing this problem in an original way.
 
Many of the contributions at one specific level can be applied to other domains as well. Advances in high-performance computing move mobile computing forward, and vice versa.
 
7. What are your predictions regarding the future of information and technology systems, especially regarding energy consumption and innovative architectures?
In the mid-term, scaling will continue and we will reach nodes in the range of a few nanometers. The technology will become extremely expensive but, if carefully used, it could still be cost-effective for some time. We will also have to become more rational and scale only the parts of the systems that need to scale. This could be done using heterogeneous technology integration, and this is where 3D could help enormously.
 
In the long term we need a serious paradigm change. CMOS technology will stop scaling at some point in time; there is no doubt about that (no one can run 100 m in 5 seconds). We will hence need another technology base: optical, quantum computing, or who knows what. This is why this period is extremely interesting for people working in this field!
 

ParaDIME at the 23rd IEEE International Symposium on Field-Programmable Custom Computing Machines

Thu, 2015-05-07
Researcher Oriol Arcas presented the ParaDIME poster "Heterogeneous Platform to Accelerate Compute Intensive Applications" at the 23rd IEEE International Symposium on Field-Programmable Custom Computing Machines, which took place from 3 to 5 May in Vancouver (Canada).
 
This symposium is the original and premier forum for presenting and discussing new research related to computing that exploits the unique features and capabilities of FPGAs and other reconfigurable hardware. Over the past two decades, FCCM has been the place to present papers on architectures, tools, and programming models for field-programmable custom computing machines as well as applications that use such systems.
 
The work presented in Vancouver was a novel heterogeneous acceleration platform for compute-intensive applications such as face recognition. Such a platform not only could greatly improve performance, but could also reduce energy consumption. This was made possible by interconnecting two advanced system-on-chip accelerators: an NVIDIA Tegra platform, which includes a CPU and a GPU, and a Xilinx ZYNQ platform, which combines a CPU and an FPGA. All four components accelerated different parts of the algorithm, achieving better results than any component alone.

Ten minutes with... Oscar Palomar, Barcelona Supercomputing Center

Thu, 2015-04-16

Oscar Palomar is a senior researcher in the Computer Architecture for Parallel Paradigms group at Barcelona Supercomputing Center (BSC). His research interests relate to vector and low-power computer architectures. In the ParaDIME project, he works closely with fellow BSC researchers Santhosh Rethinagiri and Ruben Titos, while the principal investigators are BSC’s Adrián Cristal and Osman Ünsal.

 

1. What are your research interests? What do you most enjoy researching?
My research is mainly in two related areas of computer architecture: vector and low-power architectures. Vector architectures have been around for a long time, but we are looking at them from a new perspective and for new types of application, such as databases. In scalar architectures, used in conventional processors, each instruction defines one operation, meaning that if you have to add two arrays of numbers you have to add each pair of values sequentially, with the add instruction in a loop. Vector architectures allow adding the two whole arrays using only one instruction, which is more efficient for multiple reasons; for example, the instruction only has to be read, and the processor prepared for the operation, once.
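The contrast can be sketched in software with NumPy (an analogy only, since the interview is about hardware instructions): the explicit loop mirrors scalar code issuing one add per element, while the single NumPy expression plays the role of one vector instruction applied to whole arrays.

```python
# Scalar-style vs. vector-style array addition (software analogy for
# the hardware contrast described above).
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

# Scalar style: a loop issuing one add per pair of elements.
c_scalar = np.empty_like(a)
for i in range(len(a)):
    c_scalar[i] = a[i] + b[i]

# Vector style: one expression over the whole arrays; NumPy dispatches
# internally to compiled (and typically SIMD-vectorized) code.
c_vector = a + b

assert np.array_equal(c_scalar, c_vector)
```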
 
When vector architectures were dominant in supercomputing, the most important constraint on their design was not power; computers were built to run as fast as possible, as cheaply as possible. Today, technology trends have made power a key issue. Vector architectures require a small increase in power, but they make the operations go much faster and therefore represent a more energy-efficient design when vector operations are common in the workloads. We’re now seeing a return to vectors, with some designs, such as that of the Intel Xeon Phi, approaching vector architectures in the instructions they offer, although to my mind these could be made more efficient by using a vector implementation as well as the instructions.
 
2. What areas are you concentrating on within the ParaDIME project?
Within the ParaDIME project, we have published one paper on vectors, but I’ve mostly been working closely with BSC researchers Santhosh Rethinagiri and Ruben Titos on heterogeneous and multi-core architectures. You can find out more about Santhosh’s work in the interview with him on this website. Ruben has been researching how to make inter-core communication more efficient. In a multicore processor (a chip with several processing units), there are two main approaches to communicating and exchanging data. The first is shared memory, where all the cores in the processor access a single memory address space; in the second, each core accesses its own private memory address space and uses message passing to send and receive data to and from other cores.
 
One of the assumptions of the ParaDIME project is that message passing is more efficient than shared memory; however, most chip manufacturers implement shared-memory architectures, and this situation looks likely to continue for multiple reasons. An important one is that most applications use shared memory. Ruben is therefore looking at techniques to improve the efficiency of message passing on shared-memory architectures and has proposed a way of avoiding redundant copies of data. This has two benefits: it improves performance and reduces energy consumption, as moving data requires significant amounts of energy.
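A rough software analogy of the copy problem (not Ruben's hardware technique): in Python, a multiprocessing Queue serializes and copies every message it carries, while a shared-memory block lets another process read the same payload in place. The sizes and names below are illustrative only.

```python
# Copy-based message passing vs. a zero-copy shared-memory exchange;
# a software analogy only, not the hardware mechanism studied in ParaDIME.
import numpy as np
from multiprocessing import Process, Queue, shared_memory

def consumer_copy(q: Queue) -> None:
    data = q.get()  # the array arrives as a pickled copy (~8 MB moved)
    print("copied sum:", data.sum())

def consumer_shared(name: str, n: int) -> None:
    shm = shared_memory.SharedMemory(name=name)
    view = np.ndarray((n,), dtype=np.float64, buffer=shm.buf)
    print("shared sum:", view.sum())  # reads the payload in place
    shm.close()

if __name__ == "__main__":
    arr = np.ones(1_000_000)

    q = Queue()
    q.put(arr)  # serializes and copies the whole array
    p1 = Process(target=consumer_copy, args=(q,))
    p1.start(); p1.join()

    shm = shared_memory.SharedMemory(create=True, size=arr.nbytes)
    np.ndarray(arr.shape, dtype=arr.dtype, buffer=shm.buf)[:] = arr
    p2 = Process(target=consumer_shared, args=(shm.name, arr.size))
    p2.start(); p2.join()
    shm.close(); shm.unlink()
```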
 
3. How did you come to be a computer-science researcher? Have you always enjoyed computer science?
I suppose I first got interested in computer science when my parents bought a Spectrum computer when I was a kid, which I started to program and have fun with. At school I always liked science, particularly physics and maths, although I only remember having one programming class and I don’t think I got much out of it. I also remember a philosophy teacher who told us that if we didn’t understand his logic class, we should forget about studying computer science. We didn’t really understand his class – although I think that had more to do with his teaching than anything else – but I think this might have actually motivated me more. 
 
I went on to study computer science at Barcelona Tech (Universitat Politècnica de Catalunya) and realised that the topic which interested me most was computer architectures. When doing my final project on computer architectures a professor suggested that I do a PhD, which I went on to do at the same university. 
 
Things might have changed since I was at university, but one thing I felt was missing from the course at that time was a focus on power. I think it’s also really important for computer scientists to learn about different areas: programmers need some architecture awareness, for example, and vice versa.
 
4. Why is it important for computers to be more energy efficient? What are the major technical challenges which need to be overcome to achieve this?
Obviously the less energy computers use, the lower the energy costs, especially over the long term. Every time you switch on your computer you’re using energy, so if it were more energy efficient it would mean that if you use it over the next three years, the result would be three years’ worth of energy saving. 
 
For mobile devices, energy is the most important constraint, due to the need to reduce the number of times you have to charge the battery. Batteries are crucial, in fact – we need batteries which last longer and are faster to charge, and/or have a charging system in the background – but these are out of the hands of computer scientists; all we can do is improve the energy efficiency of devices. 
 
Heterogeneous architectures are definitely the way to go to achieve energy savings: now we need to work out what they will look like and how to make them usable for programmers, so that they don’t need in-depth knowledge of the hardware to program them. At the moment, for example, we don’t have enough compiler support (compilers are the programs that transform source code written in a programming language into instructions that can be executed by the computer) for vectors. This means that we have to write low-level code to use the vector instructions directly. Using other accelerators such as GPGPUs or FPGAs is also non-trivial and demanding for the programmer.
 
5. What have you learned from working with other researchers on European projects? Do you think it’s a productive experience, despite cultural and linguistic differences?
Working with researchers from other institutions has helped me gain perspective on where our area of research lies in the hardware/software development chain: for the researchers at Neuchâtel, for example, BSC works at the low-level end, whereas for IMEC we are more high level. Also, as we’re experimenting with new ideas which don’t currently exist on any chips, at BSC we’re using simulations to try out our ideas, whereas at Neuchâtel and Technische Universität Dresden they are using real hardware. This means that our timescales and research methods are significantly different (simulation is thousands of times slower), and we have to work carefully together to ensure that the results are meaningful.
 
6. How is ParaDIME different from other projects focusing on energy-efficient computing, such as Mont-Blanc? What would a sequel to the ParaDIME project look like?
Like Mont-Blanc, ParaDIME is also looking at small, ARM-based, energy-efficient cores, although we are looking at more heterogeneous architectures. However, Mont-Blanc aims to build a supercomputer prototype, whereas ParaDIME’s research is focused more on data centres.
 
ParaDIME uses the Scala and Akka programming models, which are not intended for supercomputing. These are examples of actor models: inherently concurrent models which assume that everything is an actor that can make local decisions, create more actors, send messages to other actors, and determine how to respond to the next message received. The next steps for ParaDIME could be to integrate the different elements: getting an actor model to use efficient support for message passing, and using heterogeneous accelerators to improve the implementation of the model.
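The actor idea itself fits in a few lines of Python (a minimal conceptual sketch, not Akka's API): each actor owns a private mailbox, processes one message at a time, and interacts with the rest of the system only by sending messages.

```python
# Minimal actor sketch (illustrating the model, not Akka's API): each
# actor has a private mailbox, handles one message at a time, and is
# reached only via send().
import queue
import threading

class Actor:
    def __init__(self):
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def send(self, msg):
        self._mailbox.put(msg)

    def stop(self):
        self._mailbox.put(None)  # poison pill ends the message loop
        self._thread.join()

    def _run(self):
        while (msg := self._mailbox.get()) is not None:
            self.receive(msg)

    def receive(self, msg):  # local decision logic goes here
        raise NotImplementedError

class Counter(Actor):
    def __init__(self):
        super().__init__()
        self.count = 0  # state touched only by this actor's thread

    def receive(self, msg):
        self.count += msg
        print("count is now", self.count)

c = Counter()
c.send(1)
c.send(2)
c.stop()  # prints "count is now 1", then "count is now 3"
```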
 
7. What do you think that BSC, and perhaps even Catalonia more generally, bring to this area of research?
There is a tradition of researching architectures, and specifically vector architectures, at BSC and Barcelona Tech. BSC Director Mateo Valero has published highly influential papers on vector architectures, so I think that when people are interested in vector architectures, they consider BSC a reference. As for Catalonia, I think there’s a tradition of critical thinking (critical perhaps being the operative word) which is useful when it comes to research.
 
8. What are your predictions regarding the future of information and technology systems, especially regarding energy consumption and innovative architectures?
I think it’s dangerous to start making predictions, but we can safely say there will be a lot more connected devices and that many of them will work more autonomously. For that to work, and for smart cities to be really feasible, we will need large-scale data centres to process the data. We will also need very energy-efficient small devices to process as much data as possible locally, for things such as smart traffic distribution. This means decisions made locally but with global processing; for both of these areas, energy efficiency is of key importance.

ParaDIME at COOL Chips XVIII

Tue, 2015-04-14

Researcher Santhosh Kumar Rethinagiri presented "An Energy Efficient Hybrid FPGA-GPU based Embedded Platform to Accelerate Face Recognition Application" in the "Object recognition techniques" session at the IEEE Symposium on Low-Power and High-Speed Chips (COOL Chips XVIII) in Yokohama (Japan). S. K. Rethinagiri asserted that "heterogeneous computing is the way to tackle the dark silicon problem".