path: root/doc/langref.tex
authorPrzemyslaw Pawelczyk <przemyslaw@pawelczyk.it>2009-06-18 01:50:31 +0200
committerJosh Stone <jistone@redhat.com>2009-06-17 17:43:48 -0700
commit7f12f9a3f6aeb2452acedced5a54c66c4a19382b (patch)
tree491f2b5f26cc83319d43c2ef45417b446448c276 /doc/langref.tex
parent44b73c9d467fe0383d33dce5f1217e023f3b203b (diff)
Fix tokenize function and test.
The previous implementation was error-prone because it allowed returning empty tokens (mimicking strsep()), which is fine only where a NULL semantic exists. Unfortunately, SystemTap does not provide one in scripts and has only the blank string (""), so testing against it was misleading. The solution is to return only non-empty tokens (mimicking strtok()).

* tapset/string.stp: Fix tokenize.
* testsuite/systemtap.string/tokenize.stp: Improve and add a case with
  more than one delimiter in the delim string.
* testsuite/systemtap.string/tokenize.exp: Ditto.
* stapfuncs.3stap.in: Update tokenize description.
* doc/langref.tex: Ditto.

Signed-off-by: Josh Stone <jistone@redhat.com>
Diffstat (limited to 'doc/langref.tex')
-rw-r--r--  doc/langref.tex | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/langref.tex b/doc/langref.tex
index 5aefa278..5a149d19 100644
--- a/doc/langref.tex
+++ b/doc/langref.tex
@@ -3160,8 +3160,8 @@ General syntax:
tokenize:string (input:string, delim:string)
\end{verbatim}
\end{vindent}
-This function returns the next token in the given input string, where
-the tokens are delimited by one of the characters in the delim string.
+This function returns the next non-empty token in the given input string,
+where the tokens are delimited by characters in the delim string.
If the input string is non-NULL, it returns the first token. If the input string
is NULL, it returns the next token in the string passed in the previous call
to tokenize. If no delimiter is found, the entire remaining input string