\begin{english}
\secstar{Evaluating FOSS Contributions}
\vskip 2pt
Counting FOSS contributions for research grants has, I guess, thrown open
a whole new area of investigation: evaluating the novelty of a contribution
to a FOSS project. As we know, in any field of research, evaluation metrics
are themselves a big area of investigation, and people come up with new
distances and measures every now and then (we are in the middle of such an
effort for OCR ourselves).

Normally the novelty factor is judged by where the related paper is
published, how well the idea has been explored, and how sound the
theoretical foundation of the paper is. The interesting thing is that many
FOSS projects cite papers published in mainstream journals (and some
regularly publish such papers themselves) to be acknowledged for their
novelty and to establish the novelty of the algorithms they use. Authors'
notes in FOSS projects often report on real-world usage, ease of adaptation,
and so on. Many GIMP plugins grew out of PhD work at European universities;
a famous example is the Resynthesizer plugin.

However, the idea of evaluating the novelty factor solely from the
contribution to the project itself requires a new metric of its own. The
factors normally required to assert novelty in a collaborative project are
very much in line with the usual mechanism of labs, professors, conferences
and peer review. The one difference, and the crucial missing piece, is a
published paper. I would call papers inaccessible too, since one costs
5--10 USD depending on the publisher, conference or journal. Peer review of
the technique, its implementation in real projects, and various blog and
log entries are all available as documentation.

Peer review by subject experts already happens very well in discussions
over IRC and on mailing lists (most of which are archived). Different
perspectives, from theoretical foundations to practical implementation
issues, are discussed there in a single go, though this varies from project
to project. A contribution that generates a larger discussion and is
criticized and evaluated rigorously should get more points (very much like
the classification of conferences and journals into A+, A, and so on). We
could even classify FOSS projects by how much discussion, scrutiny and how
many perspectives new features go through before they are incorporated into
the existing system.
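
As a rough illustration only (the quantities and weights here are entirely
hypothetical, just to make the idea concrete), a contribution's score could
combine a few things the archives already record:
\[
  S = w_1 \log(1 + R) + w_2 E + w_3 D ,
\]
where $R$ is the number of review messages the contribution drew on the
mailing list or IRC, $E$ is the number of distinct expert reviewers who took
part, $D$ is $1$ if the design and its theoretical basis were documented and
$0$ otherwise, and the weights $w_i$ are something the community would have
to agree on. Thresholds on $S$ could then map to grades such as A+, A and B,
analogous to conference rankings.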

Another parallel we can draw is between the criteria conferences and
journals use for accepting a paper and the peer review systems of the
projects. These are some ideas that came to my mind while thinking about a
systematic evaluation metric for novelty in FOSS contributions. A metric
and a system like this would, I guess, also help counter a lot of software
patents. The ACM has a Special Interest Group on Computer Science
Education, which has a special section on FOSS. I haven't seen the
proceedings, so I don't know what all they have discussed, but it would be
good to check such formal forums and their proceedings for prior ideas on
the subject; I don't have access to the ACM library here. If we can put
some time and thought into this, we can develop a draft and then maybe
start an open discussion too. This would help FOSS projects avoid depending
on non-free published material to claim the novelty due to them (because
Santhosh is not interested in publishing, he is not recognized by anyone in
Indian language research academia, though his work is very popular).


I believe that whatever I wrote above took it a priori that acceptance of
an idea into a project is enough for validation. My problem was how to
evaluate the novelty factor (we know there is a novelty factor, but how do
we scale it?), and then how to turn this novelty factor around to rate the
projects themselves. At present, projects interact with academia in an
awkward way. We should find a middle ground, where an academic contribution
such as submitting a paper to an A+ journal counts the same as adding the
same algorithm, in all its detail, to a project with an A+ novelty rating.
People might not accept this at first, and there will be double
contributions for a while, but with enough campaigning, and by ensuring
that the evaluation framework is strong and therefore reliable, we can make
some progress.

It would also work as a countermeasure to the present monopolistic attitude
of IEEE, ACM, etc. On the academic publishing side, the only thing that
worries me is the argument against reviewing documentation: how does the
implementation of something in one project guarantee that it can be
re-implemented in a different scenario, if the documentation was never
aimed at that? The capability to re-implement the work and produce results
for a different set of users and a different set of purposes should also
carry weight (that is, how much does this implementation help someone do
that?). That usually does not fall within the aims of the project, and the
project does not care. But those who make the contribution and expect it to
be counted towards their degree or salary should be aware of this and do it.

Collaborative publishing can be put to good use here, and the example of
Wikipedia supports the claim. Acceptance by the user community is a
validation of novelty, but how a contribution is received is not always a
measure of it: some very novel contributions might not trigger much
response, while some trivial ones might trigger a huge one. So, in order to
evaluate novelty and the original contribution, there should be a mechanism
that projects in turn can use to count or evaluate their own innovativeness
or novelty factor. This, along with mandatory documentation of the
contribution in a collaborative, peer-reviewed, wiki-like system, should
ensure the freedom of the knowledge generated in the process.

It is not just a matter of admitting FOSS into mainstream academic
research, but more or less of bringing the idea of freedom back to
academia. We should prepare a draft framework (I don't have much of an idea
of how to prepare it), then try evaluating some recent contributions to a
few projects on the basis of this framework (we can use SILPA as one of the
sample projects), and then present it to the world as a method of counting
novelty in collaborative projects without relying on the usual measure of
publication status. All FOSS projects maintained by universities or
research organizations cite their publications to show novelty.
\end{english}
\newpage