Cores and clouds

I just returned from the kick-off meeting of a new project I am involved in. It is a project on clouds. No, not cloud computing (well, not only!), and no, not cumulonimbus either. What I mean is that the project is less about current technology and more about what the technology of the future could be.

The S(o)oS project is financed by the EU under the FET (Future and Emerging Technologies) programme, which is concerned with loooooong term research, on the horizon of 10-20 years. With such a long perspective, it is inevitable to end up having philosophical discussions about the future of computing.

Computing technology evolves so fast that it is impossible to forecast what “computing” will be in 20 years. However, we now see a clear trend towards many-core systems, and some people say we will have thousands of processing cores on the same chip. Another clear trend is towards distribution and cloud computing. Probably, computing will be very different from what we have now.

In the S(o)oS project we will speculate on the possibility of using a new operating system architecture for massively distributed and parallel processing systems, based on a service-oriented approach. Hence the acronym: Service-oriented operating Systems.

During the kick-off we had a long and fruitful discussion with the other partners on current and future programming models. It is clear that programmers cannot continue to develop software the way they did 10-15 years ago. The long era of sequential programming is coming to an end, and programmers will increasingly need to design concurrent and parallel programs. As someone pointed out, the free lunch is over: programmers can no longer simply rely on ever-increasing processor clock speeds to solve their performance problems. In the past, writing efficient programs required more time and effort than just waiting for faster processors, so nobody cared about deep performance optimization.

However, clock speed does not increase anymore, due to cost and physical limitations. To keep up with Moore's law (and avoid a financial crash), chip producers now put more than one core on the same chip. Therefore, the programmer has to make an effort now: to speed up a program, its sequential code has to be split into parallel code.
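To make the idea concrete, here is a minimal sketch (my own illustration, not from the project) of what "splitting sequential code into parallel code" can look like: summing a vector in one pass, versus summing two halves on two threads and combining the partial results.

```cpp
#include <numeric>
#include <thread>
#include <vector>

// Sequential version: one pass over the whole range.
long sum_sequential(const std::vector<long>& v) {
    return std::accumulate(v.begin(), v.end(), 0L);
}

// Parallel version: split the range in two halves,
// sum each half on its own thread, then combine.
long sum_parallel(const std::vector<long>& v) {
    auto mid = v.begin() + static_cast<long>(v.size() / 2);
    long left = 0, right = 0;
    std::thread t1([&] { left  = std::accumulate(v.begin(), mid, 0L); });
    std::thread t2([&] { right = std::accumulate(mid, v.end(), 0L); });
    t1.join();
    t2.join();
    return left + right;
}
```

On two free cores the second version can run roughly twice as fast; the point is that the programmer, not the processor, now has to find that decomposition.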

However, concurrent and parallel programming is not easy, for many reasons. First of all, few courses teach how to write concurrent code. Second, concurrent code is inherently non-deterministic, so concurrency bugs (the so-called race conditions) are harder to spot. Also, synchronization of concurrent code is necessary but introduces overhead: it is difficult to understand how to minimize that overhead so as to obtain maximum speed-up.
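A classic illustration of both points (a sketch of mine, not tied to any particular system): two threads incrementing a shared counter. Without synchronization, `++counter` is a read-modify-write that the threads can interleave, silently losing updates; guarding it with a mutex makes the result deterministic, but every increment now pays the locking overhead the paragraph above mentions.

```cpp
#include <mutex>
#include <thread>

// Two threads each increment a shared counter `per_thread` times.
// The mutex serializes the read-modify-write, so the final value
// is always 2 * per_thread. Remove the lock_guard and the result
// becomes non-deterministic: a textbook race condition.
long increment_with_mutex(int per_thread) {
    long counter = 0;
    std::mutex m;
    auto work = [&] {
        for (int i = 0; i < per_thread; ++i) {
            std::lock_guard<std::mutex> lock(m); // serialize the increment
            ++counter;
        }
    };
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    return counter;
}
```

The unsynchronized variant may even produce the right answer on some runs, which is exactly why race conditions are so hard to spot.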

Current languages do not help the programmer. With a few notable exceptions, the most popular programming languages today were designed for sequential programming. Standardization committees are coming to the rescue: for example, the next version of the C++ standard will include language support for concurrency and parallelism. However, the general feeling is that there is still a long way to go.
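To give a flavour of that upcoming support, here is a small sketch using the task-based facilities in the draft standard library (`std::async` and `std::future`): two computations launched as asynchronous tasks, with the futures collecting their results.

```cpp
#include <future>

int square(int x) { return x * x; }

// Launch two tasks that may run on separate threads; the futures
// let us block only at the point where we actually need the results.
int sum_of_squares(int a, int b) {
    std::future<int> fa = std::async(std::launch::async, square, a);
    std::future<int> fb = std::async(std::launch::async, square, b);
    return fa.get() + fb.get(); // wait for both tasks to finish
}
```

Note how threads, joining, and result passing are all hidden behind the future: this is the kind of higher-level abstraction the committees are betting on.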

Another problem is that many of these languages assume a shared-memory programming paradigm. Why is shared memory bad? Well, that is a long story that deserves its own post. Be patient until next time!


