Sharing papers

It has been a long time since I wrote anything here. I think it is time to resurrect this space. From now on I will probably not write long posts anymore, due to lack of time. However, I will try to write short posts about my research and anything related to it, so that blogging here becomes part of my job as a researcher.

Today I just want to announce that I have sent to arXiv a draft of the paper that I submitted to ECRTS. Here it is. I apologize in advance for the mistakes that are surely present in this draft.

Why submit to arXiv a paper that has not been accepted yet? I think it is a shame that we, as a community (I mean: the real-time systems research community), do not make use of modern technology for sharing our research. I do believe that we have many opportunities in front of us, and we do not take advantage of any of them. Maybe because it requires some effort on our part. As a matter of fact, the number of papers on real-time research on arXiv is ridiculously low.

Therefore, as usual, I decided to start with my own little contribution. If you want to comment on my paper, ask questions, contribute, or anything else, please write your comments below this post. It will be a real pleasure for me to answer your questions, and also to take criticism.

Here we go!

Parametric Schedulability Analysis of Fixed Priority Real-Time Distributed Systems

Youcheng Sun, Romain Soulat, Giuseppe Lipari, Étienne André, Laurent Fribourg

Parametric analysis is a powerful tool for designing modern embedded systems, because it permits exploring the space of design parameters, and checking the robustness of the system with respect to variations of some uncontrollable variable. In this paper, we address the problem of parametric schedulability analysis of distributed real-time systems scheduled by fixed priority. In particular, we propose two different approaches to parametric analysis: the first is a novel technique based on classical schedulability analysis, whereas the second is based on model checking of Parametric Timed Automata (PTA). The proposed analytic method extends existing sensitivity analysis for single processors to the case of a distributed system, supporting preemptive and non-preemptive scheduling, jitters and unconstrained deadlines. Parametric Timed Automata are used to model all possible behaviours of a distributed system, and the resulting analysis is therefore necessary and sufficient. Both techniques have been implemented in two software tools, and they have been compared with classical holistic analysis on two meaningful test cases. The results show that the analytic method provides results similar to classical holistic analysis in a very efficient way, whereas the PTA approach is slower but covers the entire space of solutions.
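For readers outside the real-time community: the “classical schedulability analysis” the abstract builds on is the standard fixed-priority response-time analysis for a single processor, which iterates R = C_i + Σ_{j∈hp(i)} ⌈R/T_j⌉·C_j to a fixed point. Here is a minimal sketch of that textbook iteration (the task set and function name are illustrative; this is not the parametric, distributed extension proposed in the paper):

```python
import math

def response_time(tasks, i):
    """Worst-case response time of task i under fixed-priority
    preemptive scheduling, by fixed-point iteration.

    tasks: list of (C, T) pairs (WCET, period), sorted by decreasing
    priority, so tasks[:i] are the higher-priority tasks.
    Returns the response time, or None if it exceeds the period
    (deadlines assumed equal to periods in this sketch).
    """
    C_i, T_i = tasks[i]
    R = C_i
    while True:
        # Interference from each higher-priority task j: it can
        # release ceil(R / T_j) jobs within a window of length R.
        interference = sum(math.ceil(R / T_j) * C_j
                           for C_j, T_j in tasks[:i])
        R_next = C_i + interference
        if R_next == R:
            return R          # fixed point reached
        if R_next > T_i:
            return None       # deadline (= period) missed
        R = R_next

# Example: three tasks (C, T) in priority order
tasks = [(1, 4), (2, 6), (3, 12)]
print([response_time(tasks, i) for i in range(3)])  # [1, 3, 10]
```

The parametric question the paper addresses is, roughly, for which regions of the (C, T) parameter space these response times stay below the deadlines, rather than checking a single concrete assignment as above.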

5 thoughts on “Sharing papers”

  1. Hi, I have a couple of comments:
    1) I’d like to understand why the focus on arXiv, as there’s a variety of websites where you can upload a paper along with metadata
    2) I’m not that inclined to publish submitted papers on the web, as it easily generates confusion. Search engines index them immediately, and make them discoverable in connection with the conference they have been submitted to. However, the work may be rejected, improved and resubmitted elsewhere. So, as a researcher, what should I do if I find such a work on the net that interests me? Should I mention it in my own works? It may very well be only a temporary work that the author may be willing to remove if in the end it gets rejected.
    3) the job of reviewers when searching for plagiarism becomes a nightmare. Imagine a conference with a double-blind process: what should a reviewer do upon finding the very same paper on the web? Should he reject it because it’s a copy of someone else’s work? Ok, if the authors paid attention, they would write “submitted to”, and later update the information, but I don’t expect this to be the case, as people are overly busy and easily forget. Once a hit is found, an investigation is needed on whether that paper was already published at that prior conference and the authors are trying to publish again only incremental results, etc. I mean, that introduces extra work.
    4) what about the restriction that one should submit only unpublished works to a conference, as can often be read in CFPs? Publishing on the web is a form of publishing, and it might violate copyright agreements.

    I’d appreciate your (or anyone else’s) view on the above points.

    • Ciao Tommaso,

      actually two couples of comments!

      1) arXiv is the oldest and the most “open”, because it is entirely supported by a university. It has been used by physicists for more than 20 years now, and it is very well known (maybe not so much by computer scientists…). The idea is that arXiv stores “pre-prints”, that is, papers that are unpublished or not yet in their final form. The copyright on those papers is still mine. Also, it is quite clear on the web page that the paper has only been submitted for evaluation, and not yet peer-reviewed. Finally, it does not bug me with tons of spam e-mails per day (see ResearchGate, which I am thinking of abandoning).

      2) Why not? As long as it is clear that the work has not been accepted, what’s wrong with publishing things you have written? It is not so different from publishing on blogs, or publishing technical reports. I think the reader can judge the validity of the research by himself.

      3) In fact, I think double-blind review is just plain wrong. As for plagiarism checking: I hope it will never become automatic. The reviewer should always check whether the paper has already been published or not. And anyway, these concerns are quite minor and should not restrict my freedom to make my thoughts public. After all, all my research is funded with public money, so everybody should be able to read it.

      4) Again, this is not the case. The copyright is mine until I give it away. I think it should remain mine also after the paper is published, but unfortunately this is not always the case. Currently the accepted rule is that the paper should not have been published in a peer-reviewed conference or journal with an ISBN. Otherwise, it is ok to “re-publish”. For example, since WATERS is not a workshop with officially printed proceedings, it should be ok to republish that material somewhere else (although there is always some moron that thinks otherwise).

      • Ciao Peppe,

        thanks for the answers, which triggered a few further neuronal sparks in my head, so I have to drop a few further comments. As a premise, I can say that, ideally speaking, I share your desire for a completely free circulation of information, including and especially technically advanced content and research results (perhaps I just left academia too recently, and my mind on these bits will completely change after a sufficiently long corporate education…).

        However, we have to face reality. Peer review is there to allow researchers to evaluate the technical soundness and novelty of a submitted scientific paper. As a researcher, I’m eager to see my work peer-reviewed and of course accepted by the peers, as it is a sign that my research was sound, no flaws are there (perhaps), etc. Without such reviewing, I could publish on my personal website as much nonsense as I’d like, but no one would care about that content, as it has not been approved/reviewed by the scientific community.

        Now, back to the point of publishing my submitted paper (on arXiv or my webpage or wherever else) on-line:

        a) the reviewers should evaluate the technical novelty as well (at least, that’s what we’re asked to do as reviewers); if an author submits a work that is very similar to one he/she already published, we recommend rejection; it’s not a matter of copyright or whatever, it’s just that the paper is not saying anything new anymore. All its technical content is already well-known; indeed there’s a published work by the same author reporting exactly (or more or less) the same ideas, discussion, results.

        b) if point a) above is true, then, as you correctly point out, how is publishing the ideas, contents, discussion and results on a website different from publishing in an official venue? Either way, the novelty, the big surprise, the change-the-world stuff is now public, it’s published, it’s known to the community, to other researchers, to the world. Equivalently, it’s not new anymore. So, if a reviewer is faced with the same contents or very similar contents, what should he/she do? Accept, simply because the previously published version on arXiv (or on the author’s webpage) had a different title, a couple of different paragraphs, and a few citations fewer? IMO, the novelty is not there anymore: it’s gone once the paper has appeared in public, no matter whether in an official IEEE/ACM venue, on a website, or in a blog post. But, you can very well say, the reviewer’s behaviour should simply be compliant with the conference/journal policy. Take for example this: “The IEEE guidelines are that the submission should contain a significant amount of new material (i.e., material that has not been published elsewhere).” What does that “elsewhere” mean?

        c) if the paper is accepted, and the copyright transferred as required by many venues/editors, then the previously published version should be removed, I guess. So, was it worth publishing the paper for a few days for free, compared to a lifetime of “closeness” in which the paper has to be paid ~20 USD? Of course, this doesn’t apply to Open Access journals, where the copyright remains with the authors.

        And thanks again for any comments… as usual, this post came out longer than foreseen!

      • I think we will also have other opportunities to talk about this important topic, because it is something that really interests me.
        I will start from the bottom.

        c) If the paper is accepted, the previously published version need not be removed. The accepted paper will be a polished version of the submitted one, so the copyright usually does not apply, and if it does, nobody enforces it. Publishers today are very careful not to raise further “flames” about open publishing, so they carefully avoid any strict enforcement of their copyright. In any case, publishers allow everybody to publish pre-prints on the author’s web page or on ResearchGate, so what’s the difference with arXiv?

        a) scientific novelty is strictly related to peer review. The reviewers check novelty with respect to other peer-reviewed and accepted research, not against just any material. So, if some crazy hacker writes about a crazy idea on his personal web page, and then I write a scientific paper on that very same idea, the reviewers will not be concerned. Of course, morally, the hacker is the inventor and I am coming late, but this is how it works now. And you know well that sometimes people reinvent the wheel and get published and cited much more than the original inventors…

        b) Well, this is actually something that nobody knows, and it has a lot to do with the future of “scientific publishing”. Peer review is now fundamental because it acts as a filter, and helps external people evaluate the importance of the work and, implicitly, the value of the researcher. So, peer review is a pillar of the academic machine. However, the Internet is challenging this a lot, and maybe things will evolve in different directions in the future. For now, read this; it is more food for thought:

  2. Nice comments!

    For a), if it’s another paper, then the story is different. If I make a scientific evaluation, implementation, study of feasibility/viability, etc. of a technique that only appeared roughly as an idea on a webpage (no matter whether it was mine or someone else’s idea), then I’m adding knowledge with my paper. What really puzzles me is if it’s actually the same paper that is being submitted for peer review.

    For b), yes, the “scalability” of the publication system has to puzzle all of us. It’s already impossible to keep ourselves up to date. On a related note, sometimes I feel the “phantom” that all the big change-the-world things have already been discovered/invented, and that nowadays we merely manage to progress incrementally, as compared to some big inventions of the past; just look at software or OSes to get that feeling. I mean, in a field, one should see big evolution at the beginning, but only tiny evolutions after some years.

    Nonetheless, the study you point out highlights that scientific papers are growing at an unprecedented rate. How do you see such a discrepancy? Are we inventing far more things than our fathers used to? Are we publishing far more rubbish than our fathers used to? Should we tighten the selectivity of scientific publications? I mean, being more selective because we’re publishing too many incremental results, not because otherwise we generate too much knowledge! Is all this growth in academic venues merely driven by the need for continuous renewal and progress of academic careers? Or is it all right and just perfect as it is now?
