
Report information
The Basics
Id: 75912
Status: resolved
Priority: 0/
Queue: devel-nytprof

People
Owner: Nobody in particular
Requestors: zefram [...] fysh.org
Cc:
AdminCc:

Bug Information
Severity: (no value)
Broken in: (no value)
Fixed in: (no value)



Subject: disagrees with Apache2::SizeLimit
Date: Tue, 20 Mar 2012 14:06:47 +0000
To: bug-Devel-NYTProf [...] rt.cpan.org
From: Zefram <zefram [...] fysh.org>
I'm using Devel::NYTProf::Apache and I'm finding that every Apache
process that's actually processed any requests (and from which I
therefore want profiling data) produces an nytprof.out file that the
tools declare to be truncated.  These processes are exiting via
Apache2::SizeLimit, and apparently that prevents the profiling data
being written out.

Versions: Apache/2.2.16, mod_perl/2.0.5, Perl/v5.12.3, Devel::NYTProf 4.06.
It's a preforking Apache.  I have this in httpd.conf:

    PerlPassEnv NYTPROF
    PerlModule Devel::NYTProf::Apache

I start and stop Apache via apachectl.  For test purposes here I'm
making a single HTTP request between start and stop.  I use an
environment variable along the lines of

    NYTPROF=stmts=0:endatexit=1:clock=2:addpid=1:file=/tmp/nytprof.t7.out

In the basic non-working case, after stopping Apache I have a set of
output files like this:

    -rw-rw-r-- 1 root     root     1967588 Mar 20 13:33 /tmp/nytprof.t7.out.27171
    -rw-rw-r-- 1 root     root     1966877 Mar 20 13:35 /tmp/nytprof.t7.out.27179
    -rw-rw-r-- 1 www-data www-data    6332 Mar 20 13:34 /tmp/nytprof.t7.out.27187
    -rw-rw-r-- 1 www-data www-data 1752545 Mar 20 13:35 /tmp/nytprof.t7.out.27188
    -rw-rw-r-- 1 www-data www-data 1752552 Mar 20 13:35 /tmp/nytprof.t7.out.27189
    -rw-rw-r-- 1 www-data www-data 1752543 Mar 20 13:35 /tmp/nytprof.t7.out.27190
    -rw-rw-r-- 1 www-data www-data 1752545 Mar 20 13:35 /tmp/nytprof.t7.out.27191
    -rw-rw-r-- 1 www-data www-data 1752542 Mar 20 13:35 /tmp/nytprof.t7.out.27192
    -rw-rw-r-- 1 www-data www-data 1752544 Mar 20 13:35 /tmp/nytprof.t7.out.27193
    -rw-rw-r-- 1 www-data www-data 1752548 Mar 20 13:35 /tmp/nytprof.t7.out.27194
    -rw-rw-r-- 1 www-data www-data 1752544 Mar 20 13:35 /tmp/nytprof.t7.out.27195
    -rw-rw-r-- 1 www-data www-data 1752549 Mar 20 13:35 /tmp/nytprof.t7.out.27196
    -rw-rw-r-- 1 root     root     1966916 Mar 20 13:35 /tmp/nytprof.t7.out.27280
    -rw-rw-r-- 1 www-data www-data 1752544 Mar 20 13:35 /tmp/nytprof.t7.out.27282

nytprofmerge says that the short file is truncated and the others are
all OK.  The short one is from the child process that handled the
request, and it exited due to size limit:

    [Tue Mar 20 13:34:58 2012] (27187) Apache2::SizeLimit httpd process too big, exiting at SIZE=305048 KB SHARE=3796 KB UNSHARED=301252 REQUESTS=2 LIFETIME=0 seconds

Docs (and Tim on IRC) suggest that DB::finish_profile() isn't getting
called.  I tried various ways of triggering it.  Adding my own END
block in a <Perl> section didn't work.  $NYTPROF additions for sigexit
and posix_exit didn't work.  Setting up a PerlChildExitHandler didn't
work.  But a PerlChildCleanupHandler did work:

    <Perl>
    use Apache2::ServerUtil ();
    use Apache2::Const qw(-compile OK);
    Apache2::ServerUtil->server->push_handlers(PerlCleanupHandler => sub {
        DB::finish_profile();
        return Apache2::Const::OK;
    });
    </Perl>

Of course, this fires after a single request, so each child process is
then only good for one request.  The result is a sane-looking set of
output files:

    -rw-rw-r-- 1 root     root     1968031 Mar 20 13:26 /tmp/nytprof.t6.out.26859
    -rw-rw-r-- 1 root     root     1967348 Mar 20 13:26 /tmp/nytprof.t6.out.26867
    -rw-rw-r-- 1 www-data www-data 2151871 Mar 20 13:26 /tmp/nytprof.t6.out.26875
    -rw-rw-r-- 1 www-data www-data 1752558 Mar 20 13:26 /tmp/nytprof.t6.out.26876
    -rw-rw-r-- 1 www-data www-data 1752561 Mar 20 13:26 /tmp/nytprof.t6.out.26877
    -rw-rw-r-- 1 www-data www-data 1752556 Mar 20 13:26 /tmp/nytprof.t6.out.26878
    -rw-rw-r-- 1 www-data www-data 1752562 Mar 20 13:26 /tmp/nytprof.t6.out.26879
    -rw-rw-r-- 1 www-data www-data 1752557 Mar 20 13:26 /tmp/nytprof.t6.out.26880
    -rw-rw-r-- 1 www-data www-data 1752557 Mar 20 13:26 /tmp/nytprof.t6.out.26881
    -rw-rw-r-- 1 www-data www-data 1752564 Mar 20 13:26 /tmp/nytprof.t6.out.26882
    -rw-rw-r-- 1 www-data www-data 1752555 Mar 20 13:26 /tmp/nytprof.t6.out.26883
    -rw-rw-r-- 1 www-data www-data 1752560 Mar 20 13:26 /tmp/nytprof.t6.out.26884
    -rw-rw-r-- 1 www-data www-data 1752554 Mar 20 13:26 /tmp/nytprof.t6.out.26917
    -rw-rw-r-- 1 root     root     1967463 Mar 20 13:26 /tmp/nytprof.t6.out.26928

nytprofmerge is happy about these files.

The most successful workaround turned out to be to comment out the
Apache2::SizeLimit configuration.  With this, and no END block or
handler, I get output files that nytprofmerge is happy with.

So apparently D:N:Apache is failing to intercept the mechanism that
Apache2::SizeLimit uses to trigger a child exit.  It's not doing
anything obviously unclean, like _exit() (which D:N would presumably
have caught, given the posix_exit option).  It's calling
$r->child_terminate, supplied by Apache2::RequestUtil, documented as
"Terminate the current worker process as soon as the current request
is over".  It seems a bug that D:N:Apache wouldn't play nicely with
that.

Stepping back a bit, I have a suggestion regarding saving profiling
data.  You ought to save it more often, and specifically it ought to
be possible to save data without stopping collection of profiling
data, and it ought to be possible to save more than once in one
process.  Each save probably needs to go to a separate file.  That's
not a problem, because you need to merge output files anyway.

If that's possible, it gets a lot easier to work with Apache.  You can
set up a PerlCleanupHandler, running after each request, that saves
data without stopping profiling.  This means you can always pick up
data covering every request processed so far, regardless of the
current state of child processes.  You then lose very little if you
fail to trap child termination.  Processes can be killed uncleanly and
you've still got profiling data from everything they did except what
they were working on at the time of killing.

The home-grown profiler that I previously used in this job, which I'm
now trying to replace with D:N to avoid the overhead (because I wrote
it in Perl), works this way.  I never got any truncated files (it
doesn't start writing out a file until it's going to write the
complete file).  I never lost significant amounts of data with it.

-zefram
On Tue Mar 20 10:07:04 2012, zefram@fysh.org wrote:
> These processes are exiting via Apache2::SizeLimit, and apparently that prevents the profiling data being written out.
Great. I'm glad you found the problem.
> Docs (and Tim on IRC) suggest that DB::finish_profile() isn't getting
> called.  I tried various ways of triggering it.  Adding my own END
> block in a <Perl> section didn't work.  $NYTPROF additions for sigexit
> and posix_exit didn't work.  Setting up a PerlChildExitHandler didn't
> work.  But a PerlChildCleanupHandler did work:
>
> <Perl>
> use Apache2::ServerUtil ();
> use Apache2::Const qw(-compile OK);
> Apache2::ServerUtil->server->push_handlers(PerlCleanupHandler => sub {
>     DB::finish_profile();
>     return Apache2::Const::OK;
> });
> </Perl>
>
> Of course, this fires after a single request, so each child process is
> then only good for one request.
You said "PerlChildCleanupHandler did work" but the code you quote is using "PerlCleanupHandler".
> It's calling $r->child_terminate, supplied > by Apache2::RequestUtil, documented as "Terminate the current worker > process as soon as the current request is over". It seems a bug that > D:N:Apache wouldn't play nicely with that.
For now I'm going to magically turn it into a feature by waving the documentation wand. Patches are welcome of course.
> Stepping back a bit, I have a suggestion regarding saving profiling
> data.  You ought to save it more often, and specifically it ought to
> be possible to save data without stopping collection of profiling
> data, and it ought to be possible to save more than once in one
> process.  Each save probably needs to go to a separate file.  That's
> not a problem, because you need to merge output files anyway.
It is possible, to some extent.  See
http://search.cpan.org/~timb/Devel-NYTProf/lib/Devel/NYTProf.pm#RUN-TIME_CONTROL_OF_PROFILING

Tim.
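The run-time control interface referenced here amounts, per the
Devel::NYTProf documentation, to three functions in the DB package.  A
minimal sketch of how they might be called (the subs are purely
illustrative, and the script must be run under perl -d:NYTProf for the
DB::* functions to exist); note, as the follow-up later in this ticket
points out, these control *collection*, not repeated saving:

```perl
# Run as:  NYTPROF=file=/tmp/nytprof.out perl -d:NYTProf sketch.pl
# (illustrative; without -d:NYTProf the DB::* functions are undefined)

DB::disable_profile();   # pause collection of profiling data
expensive_setup();       # illustrative: code we don't want profiled
DB::enable_profile();    # resume collection; optionally takes a new
                         # output filename to switch data files
handle_request();        # illustrative: code we do want profiled
DB::finish_profile();    # stop profiling and write out the data file

sub expensive_setup { }
sub handle_request  { }
```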
I've addressed this, at least partly, in r1413, with a warning if
SizeLimit is detected and assorted changes to the docs.

I'd appreciate some feedback on the code you previously quoted: you
said "PerlChildCleanupHandler did work", but the code you quoted uses
"PerlCleanupHandler".  Does changing that to PerlChildCleanupHandler
fix it?

I'll leave this ticket open as a reminder that I'd like to add code to
intercept the child_terminate() function.
Subject: Re: [rt.cpan.org #75912] disagrees with Apache2::SizeLimit
Date: Thu, 20 Jun 2013 11:36:47 +0100
To: Tim_Bunce via RT <bug-devel-nytprof [...] rt.cpan.org>
From: Zefram <zefram [...] fysh.org>
Tim_Bunce via RT wrote:
>I'll leave this ticket open as a reminder that I'd like to add code to intercept the child_terminate()
child_terminate is defined in mod_perl in a way that doesn't leave much room for interception. $r->child_terminate registers a cleanup handler; when that handler is triggered (during cleanup of the current request) it registers another cleanup handler; when *that* handler executes (after all the normal cleanup handlers) it performs exit(0). You'll need atexit(3) to intercept that. -zefram
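The interception Zefram describes can be illustrated with a small C
sketch (the function names are illustrative, not NYTProf's actual
internals): libc runs every atexit(3) handler during exit() processing,
so a hook registered this way still fires on an exit(0) made deep
inside a cleanup handler, where no Perl-level hook gets a chance to
run.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-in for the routine that writes out the
 * accumulated profile data (what DB::finish_profile triggers). */
static void flush_profile(void) {
    fputs("profile data flushed\n", stderr);
}

/* Register flush_profile with atexit(3).  libc calls every registered
 * handler when exit() runs, whether exit() is reached by returning
 * from main or called directly from arbitrary code.  Returns 0 on
 * success, nonzero on failure. */
int install_profile_exit_hook(void) {
    return atexit(flush_profile);
}
```

Note that atexit handlers do not run on _exit(2) or on death by
signal, so this covers clean exit() paths only.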
Subject: Re: [rt.cpan.org #75912] disagrees with Apache2::SizeLimit
Date: Thu, 20 Jun 2013 11:47:43 +0100
To: Tim_Bunce via RT <bug-devel-nytprof [...] rt.cpan.org>
From: Zefram <zefram [...] fysh.org>
Tim_Bunce via RT wrote:
>On Tue Mar 20 10:07:04 2012, zefram@fysh.org wrote:
>> You ought to save it more often, and specifically it ought to be possible
>> to save data without stopping collection of profiling data, and it ought
>> to be possible to save more than once in one process.
>
> It is possible, to some extent.  See
> http://search.cpan.org/~timb/Devel-NYTProf/lib/Devel/NYTProf.pm#RUN-TIME_CONTROL_OF_PROFILING
That allows the collection of profiling data to be started and stopped repeatedly. It doesn't allow repeated *saving* of profiling data, which is what's needed here. -zefram
Subject: Re: [rt.cpan.org #75912] disagrees with Apache2::SizeLimit
Date: Thu, 20 Jun 2013 12:26:02 +0100
To: Tim_Bunce via RT <bug-devel-nytprof [...] rt.cpan.org>
From: Zefram <zefram [...] fysh.org>
Attached patch implements optional hooking of libc's exit(3).

-zefram
Attachment: libcexit.patch (text/x-diff, 1.3k).  Message body is not
shown because sender requested not to inline it.

Patch applied. Thanks!

