[Date Prev][Date Next][Thread Prev][Thread Next]
- Subject: RE: Python and Lua
- From: "Luiz Carlos Silveira" <luiz@...>
- Date: Tue, 25 Apr 2000 16:04:27 -0300
From: firstname.lastname@example.org [mailto:email@example.com] On
Behalf Of Luiz Henrique de Figueiredo
Sent: Tuesday, April 25, 2000 11:38 AM
To: Multiple recipients of list
Subject: RE: Python and Lua
[I'm reposting this message of mine because it does not seem to have been
distributed, even though it made it into the archive! --lhf]
>From: "Martin Dvorak" <firstname.lastname@example.org>
>I still can't get rid of a feeling that more tag methods
>means less performance
The number of tag methods does not affect performance.
The use of tag methods does, but only slightly (essentially one more C call
and one more Lua call, plus of course whatever you do in the tag method itself).
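To illustrate the dispatch cost, here is a sketch in present-day Lua, where tag methods have since become metamethods (the names below are mine, for illustration only). The point is the same: the handler runs only when the triggering operation happens, so each such operation costs roughly one extra call plus whatever the handler itself does.

```lua
-- Sketch in present-day Lua (tag methods later became metamethods);
-- names here are illustrative, not from the original message.
local defaults = {
  __index = function(tbl, key)
    return 0  -- handler runs only when the key is absent
  end
}

local t = setmetatable({}, defaults)
t.x = 10

print(t.x)        -- plain access, no handler involved
print(t.missing)  -- absent key: one extra call into the handler, prints 0
```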
As I have said before in this list, I sympathize with the concerns about
performance, but it's notorious that we programmers are very bad at detecting
performance bottlenecks. This is even more true for interpreted languages.
So, the only way to know is to measure the time taken by your programs.
In any case, Lua is one of the fastest languages around, and 4.0 is even faster.
I'm working on a profiler (to measure the execution time of each Lua function). An alpha version is almost ready; I still have to fix some implementation problems, which currently prevent you from analyzing the profiler log of a very complex execution, and improve how the information is presented.

Basically the profiler has two parts. One creates a file with the local time and the total time (the time spent in the function's own code, and that time plus all the subcalls it makes) of *each called function*: if a function is called 20 times, 20 lines are created; no calls, no lines. You can then analyze this file in your favourite spreadsheet. The other part is the built-in analyzer, which lets you trace the execution of your program, choose which functions you would like to see the times for, and get some statistics. This is useful when the time taken by a function depends mostly on its parameters, for example a function that queries a database: the bottleneck may be not your function but its parameters, if the query is inefficient.

I don't know yet whether it is worth having a log file that big (each line is about 60 characters), or whether I should do something more elaborate like gprof (the GNU profiler), which generates a log about 500 times shorter but only gives you average times. But I guess I'm planning some "compensating" stuff for the analyzer ;)
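The "one record per call, with local and total time" idea above can be sketched with a call/return hook. This is a hypothetical sketch of mine, not the author's code, and it uses the hook interface of present-day Lua (debug.sethook); the 4.0-era debug interface differed.

```lua
-- Hypothetical per-call profiler sketch: one record per finished call,
-- with total time (including subcalls) and local time (own code only).
local records = {}   -- one record per finished call
local stack = {}     -- currently active calls

local function hook(event)
  local now = os.clock()
  if event == "call" then
    local info = debug.getinfo(2, "n")   -- the function being entered
    stack[#stack + 1] = {
      name = (info and info.name) or "?",
      start = now,
      child = 0,                         -- accumulated time of subcalls
    }
  elseif event == "return" and #stack > 0 then
    local frame = table.remove(stack)
    local total = now - frame.start      -- includes all subcalls
    records[#records + 1] = {
      name = frame.name,
      total = total,
      localtime = total - frame.child,   -- time in the function's own code
    }
    local parent = stack[#stack]
    if parent then parent.child = parent.child + total end
  end
end

local function profile(f, ...)
  debug.sethook(hook, "cr")   -- fire on "call" and "return" events
  local ok = pcall(f, ...)
  debug.sethook()             -- stop profiling
  return ok
end
```

Each entry in `records` could then be written out as one short line per call, as the message describes; the actual log format is not shown in the message.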