A tiny parser in C++

Just before the end of the year, I would like to post here one small programming project I have been working on over the last month in my (more or less) free time. It is a Tiny Parser (TIPA) written in C++11, and here is the first version, still in alpha:

https://github.com/glipari/tipa

I designed this parser borrowing some of the concepts of the Boost::Spirit parser. However, unlike Spirit, my parser is simpler and almost entirely template-free. This means that it is easier to use, but also much less powerful than Spirit.

One thing I did not like about Spirit is its steep learning curve. Sometimes I would make a stupid mistake in declaring an object, and crawling through the many lines of error messages produced by g++ was frustrating. Sometimes my parser did not work and I could not understand where the problem was. Maybe one of those BOOST_ADAPT macros? Or maybe my parser was not backtracking correctly? And then, I could never configure a decent error message for my parser. Not to mention the large amount of time needed to compile.

Instead, what I did like about Spirit was its ability to let you write parser rules almost as you would write EBNF rules. I wanted that feature, and at the same time I wanted to avoid all the complexity.

So I wrote a library which provides this specific feature: to write a parser, you write your rules in EBNF. Unfortunately, extracting information from the parsed text is less automatic than in Spirit: you have to write your “collecting” functions explicitly. However, this is not too difficult once you know how to do it. Maybe a little annoying, I know, but at least you get decent error messages when you make a mistake, and you can easily debug your code.

Here is an example of a simple calculator grammar:

    // These are the parsing rules
    rule expr, primary, term, 
	op_plus, op_minus, op_mult, op_div, r_int;
    
    // An expression is a sequence of terms separated by + or -
    expr = term >> *(op_plus | op_minus);
    op_plus = rule('+') > term;    
    op_minus = rule('-') > term;

    // A term is a sequence of primaries, separated by * or /
    term = primary >> *(op_mult | op_div);
    op_mult = rule('*') > primary;
    op_div = rule('/') > primary;

    // A primary is either an integer or 
    // an expression within parenthesis
    primary = r_int | 
	rule('(') >> expr >> rule(')');

    // An integer is an integer!
    r_int = rule(tk_int);

Now, that is it. Notice that you can declare a rule, use it, and assign it a value later on. In this way, you can easily build recursive rules. (If you want a comparison with the latest Spirit, see here: http://boost-spirit.com/home/2013/02/23/spirit-x3-on-github/).

Wait, these are only the rules; we still have to parse a string. Here is how to do it:

    // preparing the "context" of the parser
    parser_context pc;
    pc.set_stream(str);

    bool f = false;
    try {
	f = expr.parse(pc);
    } catch(parse_exc &e) {
	cout << "Parse exception!" << endl;
    }

Basically, you have to prepare an object of type “parser_context”, which will be passed along during parsing. It must be initialized with a stream object (in our code snippet, the str object). Then you take the “root” rule (in our case, the expr object) and call its parse() method, passing the context.

The parse() call returns true (if the expression was successfully recognized by the parser) or false. If an error is found, it can also raise an exception, depending on the type of error (error handling is still in a very preliminary state in the library).

Knowing whether the expression is correct is indeed useful. However, most of the time we need to do something with the expression, for example computing the result, simplifying it, or storing it somewhere. So, how can we extract useful information during parsing? Suppose you want to build a tree which represents our expression. Here we go: first, I define the classes that compose our expression tree,

class tree_node {
public:
    virtual int compute() = 0;
    virtual ~tree_node() {}
};

class op_node : public tree_node {
protected:
    shared_ptr<tree_node> left;
    shared_ptr<tree_node> right;
public:
    void set_left(shared_ptr<tree_node> l) {
	left = l;
    }
    void set_right(shared_ptr<tree_node> r) {
	right = r;
    }
};

class leaf_node : public tree_node {
    int value;
public:
    leaf_node(int v) : value(v) {}
    virtual int compute() { return value; }
};

#define OP_NODE_CLASS(xxx,sym)		 \
    class xxx##_node : public op_node {  \
    public:                              \
      virtual int compute() {            \
	int l = left->compute();         \
	int r = right->compute();        \
	return l sym r;                  \
      }                                  \
    }

OP_NODE_CLASS(plus,+);
OP_NODE_CLASS(minus,-);
OP_NODE_CLASS(mult,*);
OP_NODE_CLASS(div,/);

A leaf in this tree is a simple integer. A non-leaf node is an operation between the left and right subtrees. For example, if the node is a plus_node, computing its value consists of computing the left and right subtrees first, and then summing the two. I hope it is clear!

Then I use a helper class to build the tree incrementally.

class builder {
    stack< shared_ptr<tree_node> > st;
public:
    void make_leaf(parser_context &pc) {
	auto x = pc.collect_tokens();
        if (x.size() < 1) throw string("Error in collecting integer");
	int v = atoi(x[x.size()-1].second.c_str());
	auto node = make_shared<leaf_node>(v);
	st.push(node);
    } 

    template<class T>
    void make_op(parser_context &pc) {
	auto r = st.top(); st.pop();
	auto l = st.top(); st.pop();
	auto n = make_shared<T>();
	n->set_left(l);
	n->set_right(r);
	st.push(n);
    }
    
    int get_size() { return st.size(); }

    shared_ptr<tree_node> get_tree() {
	return st.top();
    }
};

It uses an internal stack where partial results are stored. When the parser finds an integer, we want to call the make_leaf() method, which builds a leaf_node and pushes it onto the stack. The integer is read from the parser_context object using the pc.collect_tokens() method. When an operation is found, the corresponding node is built by the make_op() template function, whose template parameter T is the corresponding node-operation class.

The last thing we need to do is to call those methods at the right moments during parsing. Here is how to do it:

    builder b; 
    using namespace std::placeholders;

    r_int   [std::bind(&builder::make_leaf,           &b, _1)];
    op_plus [std::bind(&builder::make_op<plus_node>,  &b, _1)];
    op_minus[std::bind(&builder::make_op<minus_node>, &b, _1)];
    op_mult [std::bind(&builder::make_op<mult_node>,  &b, _1)];
    op_div  [std::bind(&builder::make_op<div_node>,   &b, _1)];

It’s not rocket science, don’t worry! I first create the builder object b. Then I tell each rule which function must be called when the rule is successfully matched; but since make_leaf() and make_op() are class methods rather than plain functions, I have to turn them into callables first.

To do this I use the std::bind() library function: it takes the method, a pointer to the object to bind it to (in this case, &b), and the placeholder _1, which specifies that the argument of the method (of type parser_context&) becomes the first argument of the resulting function. It may look a little confusing at first, so I suggest you take a closer look at the std::bind description in the reference manual before trying to modify my code.

That’s it. You can find the code above in the library examples, as tipa/example/arithmetic.cpp.

If you are curious and would like to give it a try, please download it from github and let me know what you think. The license is GPLv3. I am available to implement your requests for features if they are reasonably simple!

Happy new year!

The standard introduction

As a reviewer in the “Real-time Systems” research field, I happen to review several dozen papers every year, both conference and journal papers. And most of them start with the typical standard sequence of statements:

“Nowadays, many applications have real-time requirements…”

“Real-time is a widespread requirement in modern distributed applications…”

and other boilerplate material to fill up the introduction section.

Writing the introduction is one of the most dreaded tasks for a PhD student (at least for my students!). So, usually the introduction is written by the senior researcher, who uses his experience to give an overview of the topic. Since imagination is limited, in most cases the introduction ends up a patchwork of typical standardized sentences. The fact that there are websites devoted to the problem helps make the whole thing even more standardized.

For example: in many of the papers I read, the authors continue by defining a real-time system as “a system whose correctness depends not only on the correctness of the outputs but also on the time at which they are produced”. Now, listen: explaining the definition of a real-time system is useful for novice readers who may be unfamiliar with research in this field; however, in most cases the authors then go on to assume complex and abstruse concepts from their previous papers that even specialist reviewers find difficult to understand without reading 4-5 additional papers. So, please stop defining a real-time system in the introduction; the readers will all be very grateful.

Very often the introduction contains references to application domains such as avionics, automotive, telecom, etc. These statements also serve to show that the authors are well aware of the requirements of real applications, and that their model is not just-another-useless-mathematical-abstraction. More often than not, however, the authors continue with an abstract system model, fill up the paper with equations and algorithms (whose complexity they discuss extensively), and conclude with simulations using synthetically generated task sets, never going back to the original proposition of dealing with actual applications.

These patterns are so common that I have simply started to skip the blah blah in the introduction entirely, to concentrate on the hard content in the middle. It saves me some time and lets me focus immediately on what matters.

I have to admit that in most cases I have also followed the crowd: I have written a lot of standardized and pretentious material in the introduction, and sometimes also in the abstract: shame on me!

I also noticed that in other closely related fields, like theoretical computer science and mathematics, they often skip this initial piece of hypocrisy and go directly to the point: definitions and theorems. So, my modest proposal is to start skipping this initial part: in most cases our work has nothing to do with real applications, so let’s stop pretending, and let’s start with the stuff we all like to write and to read. Or at least, let’s reduce the blah blah to the bare minimum! We will save time, trees, pixels!

What do you think?

My experience with remote teaching

My course on Object Oriented Software Design is finished. As I explained at the start of the course some months ago, this year I taught the course remotely. I used a home-made system, with open-source or freely available software: Google Hangouts as teleconferencing software; Xournal and a Wacom tablet for simulating a whiteboard; Record My Desktop to record the lectures, which were later uploaded to YouTube; LaTeX+beamer for preparing the slides; Google Sites for the course website; and Google Groups as a mailing list and forum for discussions with the students. If you want to have a look, you can visit the website. The audience consisted of approximately 15 students, 1/3 graduate students and the rest PhD students (not all of them in Computer Science, though). The course was basically about C++ programming and advanced techniques like template metaprogramming, functional programming, etc.

Now that it is over I can sum up the experience.

The Good

The first thing to say is that the students liked it a lot, which came as a bit of a surprise to me, given the initial difficulties with the software, the setup, etc. In particular, they greatly valued the possibility of watching the lessons over and over again on YouTube. They loved being able to skip ahead through the boring parts, or to listen again to the most difficult parts. Also, if somebody could not connect that day due to some other business, they could later review the lectures off-line. Finally, they could connect from two different campuses, in Pisa and Pontedera, so there was no need for them to spend time on the train between the two cities to physically attend the lectures.

One thing that some of them appreciated was the fact that at the beginning I spent some time in my editor writing programs, compiling and running them, to demonstrate the main features of the language, the pitfalls, some tricks, etc. Most of these programs were already half-baked, and I would only modify them on the fly and show the effect on the screen.

Finally, they appreciated the fact that all the material produced was on-line, so they had everything they could need for the assignments, with no need to take notes.

The Bad

Well, not exactly everything. Unfortunately I could not hire a proper lab assistant, so lab exercises were reduced to me writing the programs and them watching. From an interactive point of view, I lost a lot compared to a classical front lecture, where at some point I would give an assignment and walk among the students to assist, commenting on what they were doing, etc. I tried to substitute this with on-line help: fast responses to requests by e-mail or through the forum. It is not the same, though.

The Ugly

One thing I did not like was the lack of immediate feedback. I was sitting alone in a room, to reduce ambient noise, watching my slides on the screen and talking for two hours, with a 15-minute break in the middle. As a consequence, sometimes I went too slowly, on certain occasions I repeated the same concept too many times, and sometimes I explained a mechanism in too much detail. I greatly missed human contact and immediate face-to-face feedback.

Of course, this was not an on-line course like the ones you can find on Coursera, Udacity, etc. It was simply me recording myself while lecturing, without any post-processing. So, when I made an error while speaking, the error would remain there in the videos, never to be corrected. Post-processing requires a huge amount of time and effort, and professional technicians dedicated to it; I had neither the time nor the resources for that.

The Future?

Today, I had the opportunity to talk with my colleagues at the Technical University of Eindhoven about the future of teaching and of our profession. We agreed that our profession is going to change very soon. New technologies and on-line courses will completely change the role of the teacher and the way classes are taught.

We think that for many technical courses, front lectures will gradually be replaced by on-line broadcasts. What will remain are lab courses, where teaching assistants spend their time working on small projects together with the students, guiding them around the most common pitfalls. For a few courses, this lab work could require a substantial number of hours: in Computer Science, I am thinking of advanced software engineering classes, where students learn what it means to work in a group, work on existing code bases, etc.

However, most courses, and in particular the most basic or theoretical ones, will be taken completely on-line. A few good professors will distribute their lectures across the world. Some universities may choose to have professors record their own lectures; others will simply “buy” existing lectures by famous professors and distribute them to their students.

What about exams? I still think that evaluation should be done in person. Human interaction is essential when it comes to evaluating somebody’s ability and intelligence. It is not only about competence: sometimes it is important to understand whether the student really got the gist of the course, and you cannot assess that with standardized tests.

Of course, examiners can get important help and information from new technologies. For example, in preparing the tests, they could use on-line tools, perhaps shared by different universities and tested on thousands of students, and build statistics on the tests which can help them calibrate the course and the exam.

For sure, there will be a need for fewer “professors” and more “teaching assistants”. This could actually be seen as an opportunity to relieve the best researchers from heavy teaching duties so that they can spend more time on research; and universities certainly have a good opportunity to cut costs in the long run.

Certainly, universities need to be prepared for this change, and need to experiment with different techniques and configurations.

Personal conclusions

Next year I will probably repeat the experiment. I need a way to make it more entertaining for me: maybe adding a second screen just to see the students’ faces could help me get more feedback, I don’t know. If you have suggestions, please write them in the comments! In the meanwhile, I will go through my videos again, to see what I did wrong and how I can improve.

Mixed criticality

“Mixed Criticality systems” seems to be a hot topic: just search on Google Scholar for “mixed criticality” and see how many papers have been published recently. It even became one of the main keywords in the latest IST Workprogramme of the EU.

A mixed-criticality system is a system where different levels of certification are required for different subsystems. Some subsystems are considered highly critical, and require a higher level of certification; other parts of the system are less critical and can be subject to lower levels of certification. The problem is that we must make sure that an error in the less critical subsystems will not compromise certification of the high-criticality subsystems.

Concerning scheduling, one nice mathematical problem is how to make sure that high-criticality subsystems are guaranteed correct under all circumstances without, at the same time, under-utilising system resources.

A high-criticality task is modelled with more than one worst-case computation time. For example, it can be modelled with two: one “typical” WCET, denoted C-LO, is the one that the task will request most of the time. However, every once in a while, the task may require up to C-HI > C-LO.

Since in most cases the execution time of the high-criticality task is low, we can allocate the processing resources and admit low-criticality tasks assuming that the computation time is C-LO. However, when the computation time switches to C-HI, we must still guarantee the high-criticality task (which has been certified), and drop some low-criticality tasks if necessary. In other words, we must guarantee the high-criticality tasks under all conditions, whereas the low-criticality tasks can occasionally be dropped, when necessary.

As anticipated, many papers describe the problem and propose solutions. I suggest starting from the papers of the UNC group, one of the most respected research groups in real-time systems. Here are a few links to start with (1) (2) (3).

I am also involved in the organization of a workshop on the topic, together with Laurent George. Here is the link to the web-page. Please consider submitting a paper, or just participating!

Remote teaching

Due to my current position at ENS-Cachan, this year my lectures on Object Oriented Software Design are done remotely.

Today I just delivered my first lecture, and I have positive and negative feelings.

First, we tried to use an open-source tool, BigBlueButton. It is a very nice idea in principle: people can connect from anywhere, and it integrates webcams, microphones, slide sharing, handwriting on the slides, and desktop sharing. I decided that for the first lecture, all the students would gather in a physical classroom, while I sat in front of my PC in my office. This would simplify things (only two parties). However, it did not work out well. The problem is that BigBlueButton is not very stable: it uses Flash (brrr…) and Java (brrr…) and it crashes a lot. And its bandwidth usage is perhaps not so good. Of course, we had tried it in the previous days, and it looked as if it could work. But today it crashed three times in the first few minutes, so we decided to give up.

Of course, we had two backup solutions: Skype and Google Hangouts. We decided to go for Skype: we have a paid license that allows us to send video, voice and share the desktop at the same time.

It worked out pretty well: at some point, due to a bandwidth shortage, we disabled video, but generally it worked OK. On my desktop I alternated between the slides and a programming session using Kate and Konsole, and the desktop was sent out via Skype so the students could see the slides and my programming session in real time. While programming, I commented on what I wrote on the screen. I would change a few lines of code and recompile, showing the compiler errors or the program output and comparing it with the program listing.

I have experimented with this technique in the past, and it usually works very well with students: they get an immediate feeling of what it is like to program, especially in C/C++.

What I missed a lot was feedback from the students. This was to be expected: it is very difficult in general to interact remotely. Add to this that Italian students are usually very shy and rarely ask questions, and you will have a more or less complete picture.

So I have just opened a forum on Google Groups, to ask for and to give feedback. And I hope the students will be less shy next time.

By the way: I plan to record my next lectures, and maybe post some of them on-line. I will let you know in case, so maybe I can also receive feedback from my readers!

Sharing papers

It has been a long time since I last wrote anything here. I think it is time to resurrect this space. Probably, from now on, I will not write long posts anymore, due to lack of time. However, I will try to write short posts about my research and anything related to it, so that blogging here becomes part of my job as a researcher.

Today I just want to announce that I have sent to arXiv a draft of the paper that I submitted to ECRTS. Here it is. I apologize in advance for the mistakes that are surely present in this draft.

Why submit to arXiv a paper that has not been accepted yet? I think it is a shame that we, as a community (I mean the real-time systems research community), do not make use of modern technology for sharing our research. I do believe that we have many opportunities in front of us, and we do not take advantage of any of them. Maybe because it requires some effort on our part. As a matter of fact, the number of papers on real-time research on arXiv is ridiculously low.

Therefore, as usual, I decided to start with my own little contribution. If you want to comment on my paper, ask questions, contribute, or anything else, please write your comments below this post. It will really be a pleasure for me to respond to your questions, and also to take criticism.

Here we go!

Parametric Schedulability Analysis of Fixed Priority Real-Time Distributed Systems

Youcheng Sun, Romain Soulat, Giuseppe Lipari, Étienne André, Laurent Fribourg

Parametric analysis is a powerful tool for designing modern embedded systems, because it permits to explore the space of design parameters, and to check the robustness of the system with respect to variations of some uncontrollable variable. In this paper, we address the problem of parametric schedulability analysis of distributed real-time systems scheduled by fixed priority. In particular, we propose two different approaches to parametric analysis: the first one is a novel technique based on classical schedulability analysis, whereas the second approach is based on model checking of Parametric Timed Automata (PTA). The proposed analytic method extends existing sensitivity analysis for single processors to the case of a distributed system, supporting preemptive and non-preemptive scheduling, jitters and unconstrained deadlines. Parametric Timed Automata are used to model all possible behaviours of a distributed system, and therefore it is a necessary and sufficient analysis. Both techniques have been implemented in two software tools, and they have been compared with classical holistic analysis on two meaningful test cases. The results show that the analytic method provides results similar to classical holistic analysis in a very efficient way, whereas the PTA approach is slower but covers the entire space of solutions.

http://arxiv.org/abs/1302.1306

How to increase the Impact Factor of a Journal

I recently set up my Google Scholar profile, and I configured it to alert me whenever a new article cites one of my papers. I did it for my personal statistics (you know that I dislike evaluation based only on citation statistics), because I want to see who is citing my work, and whether they are doing it properly.

A few days ago I received an alert reporting that this paper cites one of my publications. It is a rather strange paper: I think it is a good example of what happens in some academic areas, so I decided to share my thoughts in public.

First the facts.

Fact 1

The paper is titled “Performance Analysis of IES Journals using Internet and Text Processing Robots”, and was presented at the 37th Annual IEEE Industrial Electronics Conference, IECON. The conference is listed in IEEE Xplore, and it is sponsored by the Industrial Electronics Society (IES) of the IEEE. The paper is mainly a collection of data extracted from IES journals: how many papers, how many citations, impact factors, etc. The interesting part, however, comes when you look at the References section: it lists 59 references, and except for “The Perl Black Book” and “Learning Perl”, the remaining ones are citations of other papers that have been published in IES journals, magazines or conference proceedings. Interestingly, most of these references are not cited in the text; indeed, one may wonder why references such as:

[35] K.T. Chau, C.C. Chan, Chunhua Liu, “Overview of Permanent-Magnet Brushless Drives for Electric and Hybrid Electric Vehicles,” IEEE Trans. on Industrial Electronics, vol. 55, no. 6, pp. 2246-2257, June 2008.

or

[53] T. Cucinotta, A. Mancina, G.F. Anastasi, G. Lipari, L. Mangeruca, R. Checcozzo, F. Rusina, “A Real-Time Service-Oriented Architecture for Industrial Automation ,” IEEE Trans. on Industrial Informatics, vol. 5, no. 3, pp. , Aug 2009.

are listed here at all, since they have nothing to do with the content of the paper. Also, one might wonder why these 57 papers were selected among the many others published by the IES. I think only the authors can satisfy our curiosity.

It is worth noting that one of the authors of the paper is Editor in Chief of the IEEE Transactions on Industrial Informatics, and was formerly Editor in Chief of the IEEE Transactions on Industrial Electronics. Both journals are published by the Industrial Electronics Society.

Fact 2

A few weeks ago, we got a paper rejected from the IEEE Transactions on Industrial Informatics of the IES. OK, probably the paper was not so great after all. However, one of the anonymous referees wrote this review (which I report in its entirety, typos included; I have only emphasised one sentence in bold):

This manuscript already received rejection and the revised version is stilt not on the IEEE Trans. level.

It seems that your manuscript is weak on the current state-of-the-art description, and it does not have enough current journal references. You have placed your findings in the content of conference papers instead of journal papers, which is OK but only for work published on conferences but not in journals. Notice, that out of 31 references only 3 are to the journal papers and this reason alone should be a good reason to reject the manuscript.  These 3 journals:
Real-time Systems
Journal of Systems Architecture
ACM SIGPLAN Notices
are also of questionable quality with low Eigenfactor Score, Article Influence Score, or Impact Factors

If authors are not able to connect their findings to recent journal publications of other authors it could mean:
(1) there is not much recent interest in the subject
(2) authors are not following recent journal literature
Both are good reasons for rejecting the manuscript.

You are probably wondering what our paper was about. Well, it does not matter, since the referee did not bother to write any technical comment to reinforce his suggestion to reject the paper. In fact, this referee plainly suggests that the paper ought to be rejected simply and solely because we did not reference the right journals.

Please, pay attention: he did not say which papers we should cite. Also, he is saying that there is no interest in the subject because we did not reference the important journals. Or that we are not following recent journal literature, because we did not cite the important journals. Actually, he does not even know whether the topic is uninteresting or we are just stupid: apparently, it does not matter to him.

Comments

So far I have just reported the facts, and I believe that the facts speak for themselves. At this point, everybody can form their own opinion of what is going on in the IES. However, for those of you who are not aware of how academic publishing works, I think it is worth spending a few more words of personal comment.

In my opinion, it is quite evident that both Fact 1 and Fact 2 are just two “tricks” used to increase the number of citations to journals of the IES. In the first case, by “publishing” a paper whose only purpose seems to be to list references to papers in IES publications. In the second case, by “suggesting” that future potential submitters to IEEE TII cite more “good journals”, and, implicitly, journals of the IES.

The not-so-hidden goal of these tricks is to increase the Impact Factor. The IF index is a measure of a journal’s performance: the more citations to the papers of a journal, the higher its IF. It is quite clear that one of the goals of the Editor in Chief of any journal is to increase its IF. I can just visualise in my mind the EICs of all the IEEE Transactions sitting around a table during their annual meeting, playing the “my IF is bigger than yours” game.

Is the IF a good measure of the quality of a journal? The matter has been debated for a long time. In biology and medical sciences, it is well known that in the past (and still now) the quality of a researcher was measured by the IF of the journals where his papers were published. Certainly, if the IF is built using a lot of tricks like the ones I described above, the correlation between IF and quality becomes weaker.

Are those tricks “legal”? Well, yes, there is nothing illegal going on here. Unethical maybe, but not illegal. However, in the long run, these tricks are potentially devastating for the academic community at large.

In the short run, the situation is win-win for the authors and for the editors. For example, in the first case I should be happy that the authors bothered citing my paper: this citation will contribute to my h-index, and no human being will ever check the more than one thousand citations to my scientific production one by one. This citation has become just one additional number in my batch, and my h-index will go up, thanks to the authors of the strange paper of Fact 1.

As for Fact 2: yes my paper got rejected. But the message is that by randomly citing some additional paper in the right journal, maybe in the future my paper will be accepted, thus contributing to the journal IF; and once it is published there, the more the IF goes up, the better it is for me. As for the Editorial board, they will see the IF increase, and they can go to the annual meeting and show good numbers to their colleagues. It looks like a good deal, after all.

Then, if everybody is happy, why that gut feeling?

If the academic world is like this, it is not the place I want to be. I decided to pursue an academic career for a good reason: it was not for money (my Italian friends know better), and not for glory. It was for fun. I have fun doing research and teaching, and I am paid for that. But I don’t want to work in an environment where these tricks are commonplace, because that takes away part of the fun, and in the long run it will take away all the fun.

Therefore, these are my decisions:

- I shall not review any papers, either for the IEEE Transactions on Industrial Informatics or for any other IES publication.

- I shall not submit any paper to any IES journal.

And, of course, if I convinced you of my reasons, I invite you to do the same.

One final note.

Someone may think I wrote this post as revenge for the rejection of my paper. Actually, I did not think of taking any action until I discovered Fact 1. Also, I already have enough publications in my CV, and I am in no rush to publish.

Therefore, we just resubmitted our paper to another journal, which has a much lower IF, but whose EIC does not play any “trick”.