The standard introduction

As a reviewer in the “Real-time Systems” research field, I happen to review many tens of papers every year, both conference and journal papers. And most of them start with the typical standard sequence of statements:

“Nowadays, many applications have real-time requirements…”

“Real-time is a widespread requirement in modern distributed applications…”

and other boilerplate material to fill up the introduction section.

Writing the introduction is one of the most dreaded tasks for a PhD student (at least for my students!). So, usually the introduction is written by the senior researcher, who uses his or her experience to give an overview of the topic. Since imagination is limited, in most cases the introduction ends up being a patchwork of typical standardized sentences. The fact that there are websites devoted to the problem helps make the whole thing even more standardized.

For example: in many of the papers I read, the authors continue by defining a real-time system as “a system whose correctness depends not only on the correctness of the outputs but also on the time at which they are produced”. Now, listen: explaining the definition of a real-time system is useful for novice readers who may be unfamiliar with research in this field; however, in most cases the authors then go on to assume complex and abstruse concepts from their previous papers that even specialist reviewers find difficult to understand without reading four or five additional papers. If the rest of the paper is impenetrable to novices anyway, the introductory definition serves nobody. So please stop defining a real-time system in the introduction; the readers will all be very grateful.

Very often the introduction contains references to application domains such as avionics, automotive, telecom, etc. These statements also serve to show that the authors are well aware of the requirements of real applications, and that their model is not just-another-useless-mathematical-abstraction. More often than not, however, the authors continue with an abstract system model, fill up the paper with equations and algorithms (whose complexity they discuss at length), and conclude with simulations using synthetically generated task sets, never going back to the original proposition of dealing with actual applications.
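To make the irony concrete, here is a minimal sketch of what “synthetically generated task sets” usually means in our papers. One common method is the UUniFast algorithm by Bini and Buttazzo, which draws n task utilizations that sum to a target total; the function names, the log-uniform period range, and the implicit computation of WCETs below are my own illustrative choices, not anyone’s actual experimental setup.

```python
import math
import random

def uunifast(n, total_util):
    """UUniFast (Bini & Buttazzo): draw n utilizations summing to
    total_util, uniformly distributed over the valid simplex."""
    utils = []
    remaining = total_util
    for i in range(1, n):
        # Peel off one utilization; the exponent keeps the split uniform.
        next_remaining = remaining * random.random() ** (1.0 / (n - i))
        utils.append(remaining - next_remaining)
        remaining = next_remaining
    utils.append(remaining)
    return utils

def synthetic_task_set(n, total_util, period_range=(10.0, 1000.0)):
    """Turn utilizations into (period, wcet) pairs, drawing periods
    log-uniformly so short and long periods are equally likely."""
    lo, hi = period_range
    tasks = []
    for u in uunifast(n, total_util):
        period = math.exp(random.uniform(math.log(lo), math.log(hi)))
        tasks.append((period, u * period))  # WCET = utilization * period
    return tasks

# Example: five tasks with total utilization 0.8.
print(synthetic_task_set(5, 0.8))
```

Note that nothing in this snippet knows anything about avionics, automotive, or telecom, which is rather the point.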

These patterns are so common that I have simply started to skip reading the blah blah in the introduction entirely, so I can concentrate on the hard stuff in the middle. It saves me some time and lets me focus immediately on what matters.

I have to admit that in most cases I have also followed the crowd: I have written a lot of standardized and pretentious material in the introduction, and sometimes in the abstract too. Shame on me!

I have also noticed that in other closely related fields, like theoretical computer science and mathematics, authors often skip this initial piece of hypocrisy and go directly to the point: definitions and theorems. So, my modest proposal is to start skipping this initial part: in most cases our work has nothing to do with real applications, so let’s stop pretending, and let’s start with the stuff we all like to write and to read. Or, at the very least, let’s reduce the blah blah to the bare minimum! We will save time, trees, and pixels!

What do you think?