\section*{Evaluating FOSS Contributions}
\vskip 2pt

Counting FOSS contributions towards research grants actually opens up a
new area of investigation, I think: evaluating the novelty of a
contribution to a FOSS project. As we know, in any field of research,
evaluation metrics are themselves a big area of investigation, and people
come up with new distances and measures every now and then (we are in the
middle of such an effort for OCR ourselves). Normally, novelty is judged
by where the related paper is published, how well the topic is explored,
and how sound the theoretical foundation of the paper is. The interesting
thing is that many FOSS projects cite papers published in mainstream
journals (and some regularly publish papers themselves) to get
acknowledged for their novelty and to back the novelty of the algorithms
they use. Conversely, authors often note use in FOSS projects to report
real-world usage, ease of adaptation and so on (many GIMP plugins grew
out of European university PhD work; the resynthesizer plugin is a famous
example).

But the idea of evaluating novelty solely from the contribution to the
project requires a new metric of its own. The factors normally needed to
assert novelty are all present in a collaborative project, very much in
line with the usual mechanism of labs, professors, conferences and peer
review. The one difference, and a crucial missing piece, is the published
covering paper (which I would also call inaccessible: a paper costs 5--10
USD depending on the publisher, conference and journal). What is
available instead is peer review of the technique, its implementation in
real projects, the required documentation as part of the project
documentation, and various blog and log entries. Peer review by subject
experts happens very well in discussions over IRC and mailing lists (most
of which are archived), and different perspectives, from theoretical
foundations to practical implementation issues, get discussed there in a
single go. But this varies from project to project.
A contribution that generates a bigger discussion and is criticized and
evaluated rigorously should get more points (very similar to the
classification of conferences and journals into A+, A, etc.). We could
even classify FOSS projects by how much discussion, scrutiny and how many
perspectives a new feature must survive before it is incorporated into
the existing system. Another parallel we can draw is between the criteria
conferences and journals use for accepting a paper and the peer-review
systems of the projects. These are some ideas that came to my mind when I
thought about a systematic evaluation metric for novelty in FOSS
contributions. A metric and system like this would help counter many
software patents too, I guess.
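
To make the idea concrete, here is a rough illustration of what such a
score could look like; the terms and weights here are entirely
hypothetical and would need calibration:
\[
N(c) \;=\; w_{d}\,\log\bigl(1 + D(c)\bigr) \;+\; w_{r}\,R(c) \;+\; w_{p}\,P(c)
\]
where, for a contribution $c$, $D(c)$ is the volume of archived
discussion it generated (IRC, mailing-list threads), $R(c)$ is the number
of distinct expert reviewers who engaged with it, $P(c)$ is the number of
distinct perspectives covered (theoretical foundations, practical
implementation issues, and so on), and the $w$'s are weights. The
logarithm on $D(c)$ is just one way of keeping a single long thread from
dominating the score.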

There is an ACM Special Interest Group on Computer Science Education
(SIGCSE), and it has a special section on FOSS. I haven't seen the
proceedings, so I don't know what all they have discussed, but it would
be good to check these formal forums and their proceedings for prior
ideas on the subject; I don't have access to the ACM library here. If we
can put some time and thought into this, we can develop a draft and then
maybe start an open discussion too. This would help FOSS projects avoid
depending on non-free published items to claim the novelty due to them
(since Santhosh is not interested in publishing, he is not recognized by
anyone in the academia of Indian language research, though his work is
very popular).


I think whatever I wrote took it a priori that acceptance of an idea into
a project is enough for validation. My problem was how to evaluate the
novelty factor (we know there is a novelty factor, but how do we scale
it?), and then, later on, how to turn this novelty factor around to rate
the projects themselves. Projects currently interact with academia in a
weird way; we should find a middle ground where an academic contribution
like submitting a paper to an A+ journal counts the same as adding the
same algorithm, in all its detail, to a project with an A+ novelty
rating. People might not accept it as such, and at first there will be
double contributions, but with enough campaigning, and by making the
evaluation framework strong enough to be reliable, we can make some
progress. It would also work as a countermeasure to the present
monopolistic attitude of IEEE, ACM, etc. in academic publishing.

The only thing I worry about is the argument against reviewing
documentation: how does implementing something in one project ensure it
can be reimplemented in a different scenario, if the documentation was
never aimed at that? The capability to reimplement and reproduce results
for a different set of users and a different set of purposes should also
carry weight (that is, how much does this implementation help someone do
that?). That usually doesn't fall under the aims of a project, and
projects don't care, but contributors who want their work counted towards
a degree or a salary should be aware of this and write for it.
Collaborative publishing can serve this very well, and the example of
Wikipedia supports the claim. Acceptance by the user community is one
validation of novelty, but how a contribution is received is not always a
measure of novelty: some very novel contributions might not trigger much
response, while some trivial ones trigger a huge response. So to evaluate
novelty and original contribution, there should be a mechanism that
projects can in turn use to count or evaluate their own innovativeness or
novelty factor.
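
One way to correct for this mismatch between response and novelty (again,
purely a sketch) is to judge the response a contribution receives against
a baseline for contributions of its kind, rather than in absolute terms:
\[
\tilde{N}(c) \;=\; \frac{N(c)}{\bar{N}\bigl(\mathrm{class}(c)\bigr)}
\]
where $N(c)$ is the raw score sketched above and
$\bar{N}(\mathrm{class}(c))$ is the average score of past contributions
of the same kind in that project (bug fix, new algorithm, plugin, etc.),
so that a quiet but substantial contribution is not penalized merely
because its kind attracts little discussion.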

This, along with mandatory documentation of the contribution in a
collaborative, peer-reviewed, wiki-like system, should ensure the freedom
of the knowledge generated in the process. It is a matter not just of
getting FOSS accepted into mainstream academic research, but more or less
of bringing the idea of freedom back to academia. We should prepare a
draft framework (I don't have much idea how to prepare it), then try
evaluating some recent contributions to some projects on the basis of
this framework (we can use SILPA as one of the sample projects), and then
present this to the world as a method of counting novelty in
collaborative projects without relying on the usual publication
statistics (all FOSS projects maintained by universities or research
organizations cite their publications to show novelty).
\newpage