| Commit message | Author | Age | Files | Lines |
|
It appears that the fix for #5252 wasn't complete: classes, nodes and
definitions were still using the current lexer line number instead of
the line number of the class/define/node token.
This, combined with some missing comment-stack pushes/pops on parentheses,
prevented puppetdoc from correctly getting the documentation of some classes
(including parameterized ones).
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
This was a regression, not covered by a test; previously the string
"foo\
bar"
would be interpreted as "foobar" but this was changed to "foo\\\nbar" in
2.6.x with my string interpolation refactor. This change restores the
behaviour.
|
Part of the ongoing refinement / cleanup of the string interpolation semantics.
When scanning for an unescaped string terminator, we now also allow zero or more
pairs of backslashes (that is, escaped backslashes) before the terminator.
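As a hedged illustration of the rule (a hypothetical helper, not the lexer's actual scanning regex), a terminator is unescaped exactly when an even number of backslashes immediately precedes it:

    # The quote at index i is a real terminator when preceded by an even
    # number of backslashes (zero or more escaped-backslash pairs).
    def unescaped_terminator?(str, i)
      backslashes = 0
      backslashes += 1 while i - 1 - backslashes >= 0 && str[i - 1 - backslashes] == '\\'
      backslashes.even?
    end

    unescaped_terminator?('foo\\\\"', 5)  # => true  ("foo" + two backslashes + quote)
    unescaped_terminator?('foo\\"', 4)    # => false (the quote itself is escaped)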
Thanks to Jacob for the test I should have added.
|
This is the rest of the change for #4303; James and I discussed various
ways the solution to that ticket needed to be extended but, as neither of us
committed code, nothing changed. This is the lowest-impact extension, which
mimics the behaviour of prior versions.
It leaves open the question: should '\\x' start with a single or double
backslash? If, as now, '\\x' starts with a double backslash (i.e. single quote
is the only escapable character in single quoted strings) a string ending in a
backslash cannot be represented in a single quoted string.
|
The new syntax for instantiating parameterized classes was confusing the
lexer's notion of namespaces.
This is a simple fix to prevent that syntax from polluting the
namespaces.
|
Single-quoted strings used to allow escaping of single quotes and to pass all other
characters through without comment; now they do again.
|
This is a modification of the Nick/Jesse/Matt patch, retaining their tests and
the analysis of the problem but reversing the implementation direction of the
solution.
Rather than trying to make the already somewhat brittle slurpstring smarter,
which requires telling it what following strings will be accepted by the caller
with a zero-width-lookahead negation of the regular expression used to extract
a variable name, this patch keeps that responsibility in the caller where it
belongs.
The caller (tokenize_interpolated_string) now checks to see if it got a
variable name _before_ emitting a variable token; if it got one, it proceeds
normally, but if it didn't it simply tries again from that point in the string
(accumulating the false match as a prefix). This change actually simplifies
the logic of tokenize_interpolated_string somewhat.
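A minimal sketch of that control flow (hypothetical names and simplified tokens, not the actual Puppet implementation):

    require 'strscan'

    VARNAME = /[a-z_][\w:]*/

    # Scan up to the next '$'. If a variable name follows, emit the pending
    # text and a variable token; otherwise treat the '$' as literal text and
    # retry from that point, accumulating the false match as a prefix.
    def tokenize_interpolated(scanner, prefix = "")
      chunk = scanner.scan_until(/\$/)
      return [[:string, prefix + scanner.rest]] unless chunk
      prefix += chunk.chomp('$')
      if name = scanner.scan(VARNAME)
        [[:string, prefix], [:variable, name]] + tokenize_interpolated(scanner)
      else
        tokenize_interpolated(scanner, prefix + '$')  # false match: '$' stays literal
      end
    end

    tokenize_interpolated(StringScanner.new('a $x b $ c'))
    # => [[:string, "a "], [:variable, "x"], [:string, " b $ c"]]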
|
Replaced 106806 occurrences of ^( +)(.*$) with
The ruby community almost universally (i.e. everyone but Luke, Markus, and the other eleven people
who learned ruby in the 1900s) uses two-space indentation.
3 Examples:
The code:
    end
    # Tell getopt which arguments are valid
    def test_get_getopt_args
        element = Setting.new :name => "foo", :desc => "anything", :settings => Puppet::Util::Settings.new
        assert_equal([["--foo", GetoptLong::REQUIRED_ARGUMENT]], element.getopt_args, "Did not produce appropriate getopt args")
becomes:
  end
  # Tell getopt which arguments are valid
  def test_get_getopt_args
    element = Setting.new :name => "foo", :desc => "anything", :settings => Puppet::Util::Settings.new
    assert_equal([["--foo", GetoptLong::REQUIRED_ARGUMENT]], element.getopt_args, "Did not produce appropriate getopt args")
The code:
        assert_equal(str, val)
        assert_instance_of(Float, result)
    end
    # Now test it with a passed object
becomes:
    assert_equal(str, val)
    assert_instance_of(Float, result)
  end
  # Now test it with a passed object
The code:
    end
    assert_nothing_raised do
        klass[:Yay] = "boo"
        klass["Cool"] = :yayness
    end
becomes:
  end
  assert_nothing_raised do
    klass[:Yay] = "boo"
    klass["Cool"] = :yayness
  end
|
* Replaced 704 occurrences of (.*)\b([a-z_]+)\(\) with \1\2
3 Examples:
The code:
ctx = OpenSSL::SSL::SSLContext.new()
becomes:
ctx = OpenSSL::SSL::SSLContext.new
The code:
skip()
becomes:
skip
The code:
path = tempfile()
becomes:
path = tempfile
* Replaced 31 occurrences of ^( *)end *#.* with \1end
3 Examples:
The code:
becomes:
The code:
end # Dir.foreach
becomes:
end
The code:
end # def
becomes:
end
|
Replaced 583 occurrences of
(DEF)
(LINES)
return (.*)
end
with
3 Examples:
The code:
def consolidate_failures(failed)
    filters = Hash.new { |h,k| h[k] = [] }
    failed.each do |spec, failed_trace|
        if f = test_files_for(failed).find { |f| failed_trace =~ Regexp.new(f) }
            filters[f] << spec
            break
        end
    end
    return filters
end
becomes:
def consolidate_failures(failed)
    filters = Hash.new { |h,k| h[k] = [] }
    failed.each do |spec, failed_trace|
        if f = test_files_for(failed).find { |f| failed_trace =~ Regexp.new(f) }
            filters[f] << spec
            break
        end
    end
    filters
end
The code:
def retrieve
    return_value = super
    return_value = return_value[0] if return_value && return_value.is_a?(Array)
    return return_value
end
becomes:
def retrieve
    return_value = super
    return_value = return_value[0] if return_value && return_value.is_a?(Array)
    return_value
end
The code:
def fake_fstab
    os = Facter['operatingsystem']
    if os == "Solaris"
        name = "solaris.fstab"
    elsif os == "FreeBSD"
        name = "freebsd.fstab"
    else
        # Catchall for other fstabs
        name = "linux.fstab"
    end
    oldpath = @provider_class.default_target
    return fakefile(File::join("data/types/mount", name))
end
becomes:
def fake_fstab
    os = Facter['operatingsystem']
    if os == "Solaris"
        name = "solaris.fstab"
    elsif os == "FreeBSD"
        name = "freebsd.fstab"
    else
        # Catchall for other fstabs
        name = "linux.fstab"
    end
    oldpath = @provider_class.default_target
    fakefile(File::join("data/types/mount", name))
end
|
* Replaced 83 occurrences of
(.*)" *[+] *([$@]?[\w_0-9.:]+?)(.to_s\b)?(?! *[*(%\w_0-9.:{\[])
with
\1#{\2}"
3 Examples:
The code:
puts "PUPPET " + status + ": " + process + ", " + state
becomes:
puts "PUPPET " + status + ": " + process + ", #{state}"
The code:
puts "PUPPET " + status + ": #{process}" + ", #{state}"
becomes:
puts "PUPPET #{status}" + ": #{process}" + ", #{state}"
The code:
}.compact.join( "\n" ) + "\n" + t + "]\n"
becomes:
}.compact.join( "\n" ) + "\n#{t}" + "]\n"
* Replaced 21 occurrences of (.*)" *[+] *" with \1
3 Examples:
The code:
puts "PUPPET #{status}" + ": #{process}" + ", #{state}"
becomes:
puts "PUPPET #{status}" + ": #{process}, #{state}"
The code:
puts "PUPPET #{status}" + ": #{process}, #{state}"
becomes:
puts "PUPPET #{status}: #{process}, #{state}"
The code:
res = self.class.name + ": #{@name}" + "\n"
becomes:
res = self.class.name + ": #{@name}\n"
* Don't use string concatenation to split lines unless they would be very long.
Replaced 11 occurrences of
(.*)(['"]) *[+]
*(['"])(.*)
with
3 Examples:
The code:
o.define_head "The check_puppet Nagios plug-in checks that specified " +
"Puppet process is running and the state file is no " +
becomes:
o.define_head "The check_puppet Nagios plug-in checks that specified Puppet process is running and the state file is no " +
The code:
o.separator "Mandatory arguments to long options are mandatory for " +
"short options too."
becomes:
o.separator "Mandatory arguments to long options are mandatory for short options too."
The code:
o.define_head "The check_puppet Nagios plug-in checks that specified Puppet process is running and the state file is no " +
"older than specified interval."
becomes:
o.define_head "The check_puppet Nagios plug-in checks that specified Puppet process is running and the state file is no older than specified interval."
* Replaced no occurrences of do (.*?) end with {\1}
* Replaced 1488 occurrences of
"([^"\n]*%s[^"\n]*)" *% *(.+?)(?=$| *\b(do|if|while|until|unless|#)\b)
with
20 Examples:
The code:
args[0].split(/\./).map do |s| "dc=%s"%[s] end.join(",")
becomes:
args[0].split(/\./).map do |s| "dc=#{s}" end.join(",")
The code:
puts "%s" % Puppet.version
becomes:
puts "#{Puppet.version}"
The code:
raise "Could not find information for %s" % node
becomes:
raise "Could not find information for #{node}"
The code:
raise Puppet::Error, "Cannot create %s: basedir %s is a file" % [dir, File.join(path)]
becomes:
raise Puppet::Error, "Cannot create #{dir}: basedir #{File.join(path)} is a file"
The code:
Puppet.err "Could not run %s: %s" % [client_class, detail]
becomes:
Puppet.err "Could not run #{client_class}: #{detail}"
The code:
raise "Could not find handler for %s" % arg
becomes:
raise "Could not find handler for #{arg}"
The code:
Puppet.err "Will not start without authorization file %s" % Puppet[:authconfig]
becomes:
Puppet.err "Will not start without authorization file #{Puppet[:authconfig]}"
The code:
raise Puppet::Error, "Could not deserialize catalog from pson: %s" % detail
becomes:
raise Puppet::Error, "Could not deserialize catalog from pson: #{detail}"
The code:
raise "Could not find facts for %s" % Puppet[:certname]
becomes:
raise "Could not find facts for #{Puppet[:certname]}"
The code:
raise ArgumentError, "%s is not readable" % path
becomes:
raise ArgumentError, "#{path} is not readable"
The code:
raise ArgumentError, "Invalid handler %s" % name
becomes:
raise ArgumentError, "Invalid handler #{name}"
The code:
debug "Executing '%s' in zone %s with '%s'" % [command, @resource[:name], str]
becomes:
debug "Executing '#{command}' in zone #{@resource[:name]} with '#{str}'"
The code:
raise Puppet::Error, "unknown cert type '%s'" % hash[:type]
becomes:
raise Puppet::Error, "unknown cert type '#{hash[:type]}'"
The code:
Puppet.info "Creating a new certificate request for %s" % Puppet[:certname]
becomes:
Puppet.info "Creating a new certificate request for #{Puppet[:certname]}"
The code:
"Cannot create alias %s: object already exists" % [name]
becomes:
"Cannot create alias #{name}: object already exists"
The code:
return "replacing from source %s with contents %s" % [metadata.source, metadata.checksum]
becomes:
return "replacing from source #{metadata.source} with contents #{metadata.checksum}"
The code:
it "should have a %s parameter" % param do
becomes:
it "should have a #{param} parameter" do
The code:
describe "when registring '%s' messages" % log do
becomes:
describe "when registring '#{log}' messages" do
The code:
paths = %w{a b c d e f g h}.collect { |l| "/tmp/iteration%stest" % l }
becomes:
paths = %w{a b c d e f g h}.collect { |l| "/tmp/iteration#{l}test" }
The code:
assert_raise(Puppet::Error, "Check '%s' did not fail on false" % check) do
becomes:
assert_raise(Puppet::Error, "Check '#{check}' did not fail on false") do
|
* Replaced 163 occurrences of
defined\? +([@a-zA-Z_.0-9?=]+)
with
defined?(\1)
This makes detecting subsequent patterns easier.
3 Examples:
The code:
if ! defined? @parse_config
becomes:
if ! defined?(@parse_config)
The code:
return @option_parser if defined? @option_parser
becomes:
return @option_parser if defined?(@option_parser)
The code:
if defined? @local and @local
becomes:
if defined?(@local) and @local
* Eliminate trailing spaces.
Replaced 428 occurrences of ^(.*?) +$ with \1
1 file was skipped.
test/ral/providers/host/parsed.rb because 0
* Replace leading tabs with an appropriate number of spaces.
Replaced 306 occurrences of ^(\t+)(.*) with
Tabs are not consistently expanded in all environments.
* Don't arbitrarily wrap on sprintf (%) operator.
Replaced 143 occurrences of
(.*['"] *%)
 +(.*)
with
Splitting the line does nothing to aid clarity and hinders further refactorings.
3 Examples:
The code:
raise Puppet::Error, "Cannot create %s: basedir %s is a file" %
    [dir, File.join(path)]
becomes:
raise Puppet::Error, "Cannot create %s: basedir %s is a file" % [dir, File.join(path)]
The code:
Puppet.err "Will not start without authorization file %s" %
    Puppet[:authconfig]
becomes:
Puppet.err "Will not start without authorization file %s" % Puppet[:authconfig]
The code:
$stderr.puts "Could not find host for PID %s with status %s" %
    [pid, $?.exitstatus]
becomes:
$stderr.puts "Could not find host for PID %s with status %s" % [pid, $?.exitstatus]
* Don't break short arrays/parameter lists in two.
Replaced 228 occurrences of
(.*)
 +(.*)
with
3 Examples:
The code:
puts @format.wrap(type.provider(prov).doc,
    :indent => 4, :scrub => true)
becomes:
puts @format.wrap(type.provider(prov).doc, :indent => 4, :scrub => true)
The code:
assert(FileTest.exists?(daily),
    "Did not make daily graph for %s" % type)
becomes:
assert(FileTest.exists?(daily), "Did not make daily graph for %s" % type)
The code:
assert(prov.target_object(:first).read !~ /^notdisk/,
    "Did not remove thing from disk")
becomes:
assert(prov.target_object(:first).read !~ /^notdisk/, "Did not remove thing from disk")
* If arguments must wrap, treat them all equally
Replaced 510 occurrences of
lines ending in things like ...(foo, or ...(bar(1,3),
with
\1
\2
3 Examples:
The code:
midscope.to_hash(false),
becomes:
assert_equal(
The code:
botscope.to_hash(true),
becomes:
# bottomscope, then checking that we see the right stuff.
The code:
:path => link,
becomes:
* Replaced 4516 occurrences of ^( *)(.*) with
The present code base is supposed to use four-space indentation. In some places we failed
to maintain that standard. These should be fixed regardless of the 2 vs. 4 space question.
15 Examples:
The code:
def run_comp(cmd)
puts cmd
results = []
old_sync = $stdout.sync
$stdout.sync = true
line = []
begin
open("| #{cmd}", "r") do |f|
until f.eof? do
c = f.getc
becomes:
def run_comp(cmd)
puts cmd
results = []
old_sync = $stdout.sync
$stdout.sync = true
line = []
begin
open("| #{cmd}", "r") do |f|
until f.eof? do
c = f.getc
The code:
s.gsub!(/.{4}/n, '\\\\u\&')
}
string.force_encoding(Encoding::UTF_8)
string
rescue Iconv::Failure => e
raise GeneratorError, "Caught #{e.class}: #{e}"
end
else
def utf8_to_pson(string) # :nodoc:
string = string.gsub(/["\\\x0-\x1f]/) { MAP[$&] }
string.gsub!(/(
becomes:
s.gsub!(/.{4}/n, '\\\\u\&')
}
string.force_encoding(Encoding::UTF_8)
string
rescue Iconv::Failure => e
raise GeneratorError, "Caught #{e.class}: #{e}"
end
else
def utf8_to_pson(string) # :nodoc:
string = string.gsub(/["\\\x0-\x1f]/) { MAP[$&] }
string.gsub!(/(
The code:
end
}
rvalues: rvalue
| rvalues comma rvalue {
if val[0].instance_of?(AST::ASTArray)
result = val[0].push(val[2])
else
result = ast AST::ASTArray, :children => [val[0],val[2]]
end
}
becomes:
end
}
rvalues: rvalue
| rvalues comma rvalue {
if val[0].instance_of?(AST::ASTArray)
result = val[0].push(val[2])
else
result = ast AST::ASTArray, :children => [val[0],val[2]]
end
}
The code:
#passwdproc = proc { @password }
keytext = @key.export(
OpenSSL::Cipher::DES.new(:EDE3, :CBC),
@password
)
File.open(@keyfile, "w", 0400) { |f|
f << keytext
}
becomes:
# passwdproc = proc { @password }
keytext = @key.export(
OpenSSL::Cipher::DES.new(:EDE3, :CBC),
@password
)
File.open(@keyfile, "w", 0400) { |f|
f << keytext
}
The code:
end
def to_manifest
"%s { '%s':\n%s\n}" % [self.type.to_s, self.name,
@params.collect { |p, v|
if v.is_a? Array
" #{p} => [\'#{v.join("','")}\']"
else
" #{p} => \'#{v}\'"
end
}.join(",\n")
becomes:
end
def to_manifest
"%s { '%s':\n%s\n}" % [self.type.to_s, self.name,
@params.collect { |p, v|
if v.is_a? Array
" #{p} => [\'#{v.join("','")}\']"
else
" #{p} => \'#{v}\'"
end
}.join(",\n")
The code:
via the augeas tool.
Requires:
- augeas to be installed (http://www.augeas.net)
- ruby-augeas bindings
Sample usage with a string::
augeas{\"test1\" :
context => \"/files/etc/sysconfig/firstboot\",
changes => \"set RUN_FIRSTBOOT YES\",
becomes:
via the augeas tool.
Requires:
- augeas to be installed (http://www.augeas.net)
- ruby-augeas bindings
Sample usage with a string::
augeas{\"test1\" :
context => \"/files/etc/sysconfig/firstboot\",
changes => \"set RUN_FIRSTBOOT YES\",
The code:
names.should_not be_include("root")
end
describe "when generating a purgeable resource" do
it "should be included in the generated resources" do
Puppet::Type.type(:host).stubs(:instances).returns [@purgeable_resource]
@resources.generate.collect { |r| r.ref }.should include(@purgeable_resource.ref)
end
end
describe "when the instance's do not have an ensure property" do
becomes:
names.should_not be_include("root")
end
describe "when generating a purgeable resource" do
it "should be included in the generated resources" do
Puppet::Type.type(:host).stubs(:instances).returns [@purgeable_resource]
@resources.generate.collect { |r| r.ref }.should include(@purgeable_resource.ref)
end
end
describe "when the instance's do not have an ensure property" do
The code:
describe "when the instance's do not have an ensure property" do
it "should not be included in the generated resources" do
@no_ensure_resource = Puppet::Type.type(:exec).new(:name => '/usr/bin/env echo')
Puppet::Type.type(:host).stubs(:instances).returns [@no_ensure_resource]
@resources.generate.collect { |r| r.ref }.should_not include(@no_ensure_resource.ref)
end
end
describe "when the instance's ensure property does not accept absent" do
it "should not be included in the generated resources" do
@no_absent_resource = Puppet::Type.type(:service).new(:name => 'foobar')
becomes:
describe "when the instance's do not have an ensure property" do
it "should not be included in the generated resources" do
@no_ensure_resource = Puppet::Type.type(:exec).new(:name => '/usr/bin/env echo')
Puppet::Type.type(:host).stubs(:instances).returns [@no_ensure_resource]
@resources.generate.collect { |r| r.ref }.should_not include(@no_ensure_resource.ref)
end
end
describe "when the instance's ensure property does not accept absent" do
it "should not be included in the generated resources" do
@no_absent_resource = Puppet::Type.type(:service).new(:name => 'foobar')
The code:
func = nil
assert_nothing_raised do
func = Puppet::Parser::AST::Function.new(
:name => "template",
:ftype => :rvalue,
:arguments => AST::ASTArray.new(
:children => [stringobj(template)]
)
becomes:
func = nil
assert_nothing_raised do
func = Puppet::Parser::AST::Function.new(
:name => "template",
:ftype => :rvalue,
:arguments => AST::ASTArray.new(
:children => [stringobj(template)]
)
The code:
assert(
@store.allowed?("hostname.madstop.com", "192.168.1.50"),
"hostname not allowed")
assert(
! @store.allowed?("name.sub.madstop.com", "192.168.0.50"),
"subname name allowed")
becomes:
assert(
@store.allowed?("hostname.madstop.com", "192.168.1.50"),
"hostname not allowed")
assert(
! @store.allowed?("name.sub.madstop.com", "192.168.0.50"),
"subname name allowed")
The code:
assert_nothing_raised {
server = Puppet::Network::Handler.fileserver.new(
:Local => true,
:Config => false
)
}
becomes:
assert_nothing_raised {
server = Puppet::Network::Handler.fileserver.new(
:Local => true,
:Config => false
)
}
The code:
'yay',
{ :failonfail => false,
:uid => @user.uid,
:gid => @user.gid }
).returns('output')
output = Puppet::Util::SUIDManager.run_and_capture 'yay',
@user.uid,
@user.gid
becomes:
'yay',
{ :failonfail => false,
:uid => @user.uid,
:gid => @user.gid }
).returns('output')
output = Puppet::Util::SUIDManager.run_and_capture 'yay',
@user.uid,
@user.gid
The code:
).times(1)
pkg.provider.expects(
:aptget
).with(
'-y',
'-q',
'remove',
'faff'
becomes:
).times(1)
pkg.provider.expects(
:aptget
).with(
'-y',
'-q',
'remove',
'faff'
The code:
johnny one two
billy three four\n"
# Just parse and generate, to make sure it's isomorphic.
assert_nothing_raised do
assert_equal(text, @parser.to_file(@parser.parse(text)),
"parsing was not isomorphic")
end
end
def test_valid_attrs
becomes:
johnny one two
billy three four\n"
# Just parse and generate, to make sure it's isomorphic.
assert_nothing_raised do
assert_equal(text, @parser.to_file(@parser.parse(text)),
"parsing was not isomorphic")
end
end
def test_valid_attrs
The code:
"testing",
:onboolean => [true, "An on bool"],
:string => ["a string", "A string arg"]
)
result = []
should = []
assert_nothing_raised("Add args failed") do
@config.addargs(result)
end
@config.each do |name, element|
becomes:
"testing",
:onboolean => [true, "An on bool"],
:string => ["a string", "A string arg"]
)
result = []
should = []
assert_nothing_raised("Add args failed") do
@config.addargs(result)
end
@config.each do |name, element|
|
RDoc's parser produces errors on this sort of statement:
def (variable).method
This patch wraps our occurrences of those definitions with comments that
suspend RDoc parsing.
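For illustration, a sketch using RDoc's standard :stopdoc:/:startdoc: directives (the exact comments used in the patch aren't shown here):

    obj = Object.new
    # :stopdoc: -- RDoc stops processing here, so its parser never sees the
    # singleton method definition that trips it up...
    def (obj).greet
      "hello"
    end
    # :startdoc: -- ...and resumes parsing here.
    obj.greet  # => "hello"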
|
This operator tests whether the left operand is contained in the right one.
The left operand must resolve to a string, but the right operand can be:
* a string
* an array
* a hash (the search is done on the keys)
This syntax can be used in any place where an expression is supported.
Syntax:
$eatme = 'eat'
if $eatme in ['ate', 'eat'] {
...
}
$value = 'beat generation'
if 'eat' in $value {
notice("on the road")
}
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
Thanks to Alan Barrett for the patch
|
"${myclass::var}" was lexed as a CLASSNAME instead of a VARIABLE token,
giving an error while parsing because an rvalue can't be a bare CLASSNAME
token.
This commit (based on Brice's) fixes it by restricting the contexts in which
the CLASSNAME and CLASSREF tokens are acceptable, analogous to the handling
of NAME tokens.
|
"${myclass::var}" was lexed as a CLASSNAME instead of a VARIABLE token,
giving an error while parsing because an rvalue can't be a bare CLASSNAME
token.
This patch fixes the issue by making VARIABLE lexing higher priority than
CLASSNAME.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
You can now specify relationships directly in the language:
File[/foo] -> Service[bar]
Specifies a normal dependency while:
File[/foo] ~> Service[bar]
Specifies a subscription.
You can also do relationship chaining, specifying multiple
relationships on a single line:
File[/foo] -> Package[baz] -> Service[bar]
Note that, while it's potentially confusing, the arrows don't all
have to point in the same direction:
File[/foo] -> Service[bar] <~ Package[baz]
This can provide some succinctness at the cost of readability.
You can also specify full resources, rather than just
resource refs:
file { "/foo": ensure => present } -> package { bar: ensure => installed }
But wait! There's more! You can also specify a collection on either side
of the relationship marker:
yumrepo { foo: .... }
package { bar: provider => yum, ... }
Yumrepo <| |> -> Package <| provider == yum |>
This, finally, provides easy many-to-many relationships in Puppet, but it also opens
the door to massive dependency cycles. This last feature is a very powerful stick,
and you can hurt yourself with it considerably.
Signed-off-by: Luke Kanies <luke@puppetlabs.com>
|
It's about 10x faster to read the whole file than to read each line and
concatenate them (actually, it's O(n) vs. O(n^2), so the exact speedup
depends on the file size).
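A hedged sketch of the two approaches (illustrative only; 'file' is a hypothetical path variable):

    # O(n): one read of the whole file.
    contents = File.read(file)

    # O(n^2): every += copies everything accumulated so far.
    contents = ""
    File.open(file) { |f| f.each_line { |line| contents += line } }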
|
From email:
Some of the errors I needed to track down were actually coming from my
string interpolation branch:
* I wasn't handling "Foo ${1} bar" as a regexp back reference (and I don't like it, but hey)
* I wasn't warning about, and passing on, the "unneeded" backslash in strings like 'foo\"bar'
* I fumbled part of the conflict resolution with Brice's hash patch.
|
This patch moves the syntactic aspects of string interpolation up
into the lexer/parser phase, preparatory to moving the semantic
portions down to the as yet unnamed futures resolution phase.
This is an enabling move, designed to allow:
* Futures resolution in and between interpolated strings
* Interpolation of hash elements into strings
* Removal of certain order-dependent paths
* Further modularization of the lexer/parser
The key change is switching from viewing strings with interpolation
as single lexical entities (which await later special case processing)
to viewing them as formulas for constructing strings, with the internal
structure of the string exposed by the parser.
Thus a string like:
"Hello $name, are you enjoying ${language_feature}?"
internally becomes something like:
concat("Hello ",$name,", are you enjoying ",$language_feature,"?")
where "concat" is an internal string concatenation function.
A few test cases to show the user observable effects of this change:
notice("string with ${'a nested single quoted string'} inside it.")
$v2 = 3+4
notice("string with ${['an array ',3,'+',4,'=',$v2]} in it.")
notice("string with ${(3+5)/4} nested math ops in it.")
...and so forth.
The key changes in the internals are:
* Unification of SQTEXT and DQTEXT into a new token type STRING (since
nothing past the lexer cares about the distinction).
* Creation of several new token types to represent the components of
an interpolated string:
DQPRE   The initial portion of an interpolated string
DQMID   The portion of a string betwixt two interpolations
DQPOST  The final portion of an interpolated string
DQCONT  The as-yet-unlexed portion after an interpolation
Thus, in the example above (phantom curly braces added for clarity),
DQPRE "Hello ${
DQMID }, are you enjoying ${
DQPOST }?"
DQCONT is a bookkeeping token and is never generated.
* Creation of a DOLLAR_VAR token to strip the "$" off of variables
with explicit dollar signs, so that the VARIABLEs produced from
things like "Test ${x}" (where the "$" has already been consumed)
do not fail for want of a "$"
* Reworking the grammar rules in the obvious way
* Introduction of a "concatenation" AST node type (which will be going
away in a subsequent refactor).
Note finally that this is a component of a set of interrelated refactors,
and some of the changes around the edges of the above will only make
sense in the context of the other parts.
|
This is my proposed attack on the lexing problem, with a few minor
cleanups to simplify its integration. The strategy:
* Annotate tokens with a method "acceptable?" that determines if
they can be generated in a given context. Have this default
to true.
* Give the lexer the notion of a context; initialize it and
update it as needed. The present context records the name of
the last significant token generated and a start_of_line flag.
* When a token is found to match, check if it is acceptable in
the present context before generating it.
These changes don't result in any change in behaviour, but they
enable:
* Give the REGEX token an acceptable? rule that only permits a
regular expression in specific contexts.
The other changes were a fix to the scan bug Brice reported,
adjusting a test and clearing up some cluttered conditions in the
context collection path.
Added tests and subsumed change restricting REGEX to one line.
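As a minimal sketch of the mechanism (hypothetical token names and context keys; the REGEX rule shown is illustrative, not the lexer's exact list):

    Token = Struct.new(:name, :regex, :acceptable_when) do
      # A token with no rule is acceptable in every context.
      def acceptable?(context)
        acceptable_when.nil? || acceptable_when.call(context)
      end
    end

    # Only let '/' start a regex where a value is expected, so that
    # "$x = 4096 / 4" keeps lexing '/' as division.
    REGEX = Token.new(:REGEX, %r{/[^/\n]*/}, lambda { |ctx|
      ctx[:start_of_line] or [:MATCH, :NOTMATCH, :COMMA, :LBRACE].include?(ctx[:last_token])
    })
    DIV = Token.new(:DIV, %r{/})  # always acceptable

    REGEX.acceptable?(:start_of_line => false, :last_token => :NAME)   # => false
    REGEX.acceptable?(:start_of_line => false, :last_token => :MATCH)  # => true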
|
This is not the real fix; it is just a hot-fix to limit
the issue.
The issue is that the lexer's regex token takes precedence over the
simple '/' (divide) token.
In the following expression:
$var = 4096 / 4
$var2 = "/tmp/file"
The / 4... part is mis-lexed as a regex instead of part of a mathematical
expression.
The current fix limits regexes to a single line.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
Signed-off-by: Luke Kanies <luke@madstop.com>
|
The lexer recognizes regexes delimited by '/', as in:
/^$/
The match operator is defined by =~
The negated match operator is defined by !~
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
Due to the problem that we associate documentation in the lexer and
not in the parser (which would be too complex and unmaintainable to
do), and since the parser reads new tokens before reducing
the current statement (thus creating the AST node), we could
sometimes associate with a statement comments that actually appear
after it.
Ex:
1. $foo = 1
2. # doc of next class
3. class test {
When we parse the first line, the parser can reduce this to the
correct VarDef only after it has lexed the CLASS token.
But lexing this token means we already pushed on the comment stack
the "doc of next class" comment.
That means at the time we create the AST VarDef node, the parser thinks
it should associate this documentation with it, which is incorrect.
Now that the parser uses token line numbers, we can enhance the lexer
to associate comments with the current AST node only if the statement's
line number is greater than or equal to the last comment's line number.
This makes it impossible to associate with a statement a comment that
appears later in the source.
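A sketch of the rule (hypothetical names, not the actual lexer code):

    # Only attach the pending comment if it appeared at or before the
    # statement being reduced; otherwise it documents a later statement.
    def doc_for(statement_line)
      @last_comment_line <= statement_line ? @comment_stack.last : ""
    end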
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
Careful inspection of the parser code shows that when we
associate a source line number with an AST node, we use the
current line number of the currently lexed token.
In many cases this is correct, but in some cases it is not.
Unfortunately, due to how LALR parsers work, the AST node for
a statement can be created _after_ we have lexed a token beyond
the current statement:
1. $foo = 1
2.
3. class test
When the parser asks for the class token, it can reduce the
assignment statement into the AST VarDef node, because no other
grammar rule matches. Unfortunately we have already lexed the class
token, so we give the VarDef node line number 3 instead of 1.
This is not a real issue for error reporting, but becomes a real
concern when we associate documentation comments with AST nodes for
puppetdoc.
The solution is to enhance the tokens lexed and returned to the parser
to carry their declaration line number.
Thus a token value becomes a hash: { :value => tokenvalue, :line => line }
Next, each time we create an AST node, we use the line number of
the correct token (i.e. the line number of foo in the previous example).
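A minimal sketch of the change (hypothetical names, not the actual lexer code):

    # Each emitted token carries its own line number, so the parser can use
    # the token's line rather than the lexer's current line when it finally
    # reduces a statement into an AST node.
    def emit(token_name, value)
      [token_name, { :value => value, :line => @line }]
    end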
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
Comments and multi-line comments produce no token per se during
lexing, so the lexer loops to find another token.
The issue was that we were not skipping whitespace after finding
such a non-token.
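A sketch of the fixed loop (hypothetical helper names):

    # Consuming a comment yields no token, so whitespace must be skipped
    # again before scanning for the next real token.
    loop do
      skip_whitespace
      break unless skip_comment || skip_multiline_comment
    end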
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
This involves lexing '::class' tokens along with correctly
looking them up from the Resource::Reference class.
Signed-off-by: Luke Kanies <luke@madstop.com>
|
The lexer maintains a stack of the last-seen comments.
On blank lines the lexer flushes the comments.
On each opening brace the lexer enters a new stack level.
On each block AST node, the stack is popped.
Each AST node has a doc property that is filled with the
last-seen comments on node creation (in fact only on creation of the
important nodes representing statements).
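A minimal sketch of that behaviour (hypothetical names, not the actual lexer code):

    # Comments are tracked per brace level; the innermost level's comments
    # become the doc of the next statement node created at that level.
    class CommentStack
      def initialize;   @stack = [[]];          end
      def see(comment); @stack.last << comment; end  # a comment was lexed
      def flush;        @stack.last.clear;      end  # blank line: discard
      def push_level;   @stack.push([]);        end  # on each opening brace
      def pop_level;    @stack.pop;             end  # on each block AST node
      def doc;          @stack.last.join("\n"); end  # attached at node creation
    end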
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
Signed-off-by: Luke Kanies <luke@madstop.com>
|
Expressions can be used in 'if' tests and on the
right side of assignments.
The expressions can contain any number of sub-expressions
combined by either arithmetic operators, comparison operators,
or boolean operators.
Random Usage Examples:
$result = ((( $two + 2) / $one) + 4 * 5.45) - (6 << 7) + (0x800 + -9)
or
if ($a < 10) and ($a + 10 != 200) {
...
}
|
The append variable operator can be used to append something to
a variable defined in a parent scope, containing either a string
or an array.
The main use is for classes to append array elements to a variable
defined globally in a node.
Example:
$ssh_users = ['brice', 'admin1']
class backup {
$ssh_users += ['backup_operator']
...
}
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
|
including not compiling the configurations, and also storeconfigs
is no longer required during parse-testing.
|
classes for managing how the tokens work.
I also moved the tests to RSpec, but I didn't rewrite all of them.
|
lexer. Updated CLASSREF token regex in the lexer.
|
parameters
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@2670 980ebf18-57e1-0310-9a29-db15c13687c0
|
notification of what was expected in most cases
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@2531 980ebf18-57e1-0310-9a29-db15c13687c0
|
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@2522 980ebf18-57e1-0310-9a29-db15c13687c0
|
http://mail.madstop.com/pipermail/puppet-users/2007-April/002398.html .
You can now retrieve qualified variables by specifying the full class path.
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@2393 980ebf18-57e1-0310-9a29-db15c13687c0
|
iterative evaluation, with collections being evaluated first. This way collections can find resources that either are inside defined types or are the types themselves.
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1967 980ebf18-57e1-0310-9a29-db15c13687c0
|
significant rewrite of the parser, but it has little effect on the rest of the code tree.
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1726 980ebf18-57e1-0310-9a29-db15c13687c0
|
#271.
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1610 980ebf18-57e1-0310-9a29-db15c13687c0
|
worth the priority I suddenly placed on them).
First, it adds search paths as I originally requested in #114. There is
now a 'lib' setting, which can be used to tell Puppet where to find
manifests. Any file you tell Puppet to parse will have its directory
automatically added to the lib path. Also, Puppet will check the
PUPPETLIB environment variable for further directories to search.
Second, it converts the 'import' mechanism into a normal function, which
means that you can now use variables and what-have-you in it. Of
course, this function uses the lib mechanism. This is something that's
always bothered me about the language, and having it fixed means you can
do simple things like have custom code in the top scope for each
operating system and then do "import os/$operatingsystem" to evaluate
that code. Without this, you would either need a huge case statement or
the code would need to be in a class, which often isn't sufficient.
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1605 980ebf18-57e1-0310-9a29-db15c13687c0
|
start, anyway.
git-svn-id: https://reductivelabs.com/svn/puppet/trunk@1483 980ebf18-57e1-0310-9a29-db15c13687c0
|