- Subject: Re: Most awesome string metatable hack
- From: Andrew Starks <andrew.starks@...>
- Date: Sat, 1 Feb 2014 15:44:49 -0600
On Saturday, February 1, 2014, Patrick Donnelly <batrick@batbytes.com> wrote:
On Fri, Jan 31, 2014 at 10:00 AM, Andrew Starks <andrew.starks@trms.com> wrote:
>
> On Fri, Jan 31, 2014 at 6:33 AM, Fabien <fleutot+lua@gmail.com> wrote:
>>
>>
>>> print("hello %s" % { "world" }).
>
>
> Hmmm.... dangit!
>
> I like this better, except that it uses a table. But this is sweet. The
> extra parens kind of kill the syntactic beauty of the __call method. If not
> for that, it'd win in my book.
I think it would be an improvement if the % operator also accepted
closures in this case:
"Hello %s" % function() return "world" end
It's generally lighter weight than tables and, if you have closure
caching with unchanging upvalues, it can be constant cost.
--
Patrick Donnelly
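[For reference, the whole hack fits in a few lines. A minimal sketch of __mod on the shared string metatable, with Patrick's closure case folded in; assumes Lua 5.2+ (on 5.1, use unpack instead of table.unpack):

-- % on a string formats it; the right operand may be a table,
-- a closure, or a single value.
getmetatable("").__mod = function(fmt, arg)
  if type(arg) == "table" then
    return string.format(fmt, table.unpack(arg))
  elseif type(arg) == "function" then
    return string.format(fmt, arg())  -- call lazily, format the results
  else
    return string.format(fmt, arg)
  end
end

print("hello %s" % { "world" })                    --> hello world
print("Hello %s" % function() return "world" end)  --> Hello world]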
It's all about in-the-moment checking of variables and such, for me. The syntax of concatenation takes too many quotes and dots and, actually, too many parens (due to precedence).
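[To make the typing cost concrete, the same hypothetical log line both ways; n and state are stand-in locals:

local n, state = 42, "ready"  -- hypothetical values for illustration

print("frame " .. n .. ": " .. state)      -- concatenation: quotes and dots
print(("frame %d: %s"):format(n, state))   -- format: precedence forces the parens around the literal]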
I'm not suggesting a change here. I'm just saying that I find format to be a much, much faster mechanism than concat, and in the mindset I'm in when I'm ripping off a little log hint or otherwise hacking around with strings, the more efficient the typing, the happier I am.
string.format is, by a wide margin, the most used method in the string library. So I still like it for __call.
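[A minimal sketch of that __call flavor; it works in stock Lua 5.1+, since all strings share one metatable:

getmetatable("").__call = function(fmt, ...)
  return string.format(fmt, ...)
end

print(("hello %s")("world"))  --> hello world; the extra parens are the cost]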
However, the perfect symmetry of the % operator cannot be denied. So, if this were a debate, I'd hand the gold medal to the Fabien / Python camp, table and all[1].
-Andrew
1: I just did my first bit of Lua profiling. I'm serializing and deserializing tables as messages with nanomsg on every frame. I made no attempt to optimize, including using tables to store real numbers (fractions) and my favorite bit of laziness: using Penlight's pretty.write and load for the serialization.
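[That lazy round-trip is roughly this sketch; assumes Penlight is installed, and on Lua 5.1 load would be loadstring:

local pretty = require "pl.pretty"  -- Penlight

local msg = { num = 1, den = 8 }        -- hypothetical fraction-as-table message
local wire = pretty.write(msg)          -- table -> Lua-literal string
local back = load("return " .. wire)()  -- string -> table again]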
Memory usage <sarcasm>EXPLODED</sarcasm> to almost one meg after 600 frames when I turned garbage collection off. When I did a full collection every frame, it stayed at 500k. Either way, processor usage was consistently below the lowest time value I could measure, which was 1/8th of a frame.
Some day I'll learn to stop worrying about tables.