Subject: Error opening ErrFile with PerlIO_findFILE
Date: Mon, 02 Feb 2015 11:02:01 -0500
To: bug-BerkeleyDB [...] rt.cpan.org
From: David Mansfield <david [...] cobite.com>
We have stumbled across an issue (not sure whether it is actually a bug or not!) pertaining to the opening of the ErrFile in the XS code. In our case, we have duped a pipe's write filehandle to replace STDOUT and STDERR, and we are passing:

    -ErrFile => *STDERR

It reduces to the fact that PerlIO_findFILE is returning NULL in the GetFILEptr macro (and USE_PERLIO is defined). Breaking it down during debugging, the pointer returned by the inner functions, IoIFP(sv_2io(sv)), is valid, but PerlIO_findFILE still returns NULL.

The documentation of PerlIO_findFILE says: "Returns previously 'exported' FILE * (if any)." There is another function on the same man page, PerlIO_exportFILE, which is described as: "Given a PerlIO * return a 'native' FILE * suitable for passing to code expecting to be compiled and linked with ANSI C stdio.h".

I tried substituting PerlIO_exportFILE and it now works, but I'm not sure of the impact. Perhaps if PerlIO_findFILE returns NULL, PerlIO_exportFILE should be tried? I don't know the semantics well enough to say.

As to why this is happening now, I cannot explain it. We have half a dozen systems across different CentOS releases running the exact same code, and only on a new system, running the exact same versions of everything, does this happen. The only notable difference about the new system is that it has a lot of CPU cores, so a race condition seems the most likely culprit. I can only assume something has changed about the state of the PerlIO layers that causes this behavior change, but I cannot figure out what.

-- 
Thanks,
David Mansfield
Cobite, INC.
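P.S. A rough sketch of the fallback I am suggesting, for discussion only: this is not a tested patch, the helper name get_file_ptr is my invention (the real code uses the GetFILEptr macro in BerkeleyDB.xs), and it only compiles inside an XS/embedding context against the Perl headers.

```c
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"

/* Hypothetical replacement for the GetFILEptr logic:
 * try PerlIO_findFILE first (returns a previously 'exported'
 * FILE*, if any), and only if that yields NULL fall back to
 * PerlIO_exportFILE, which creates a native FILE* for the stream.
 */
static FILE *
get_file_ptr(pTHX_ SV *sv)
{
    PerlIO *pio = IoIFP(sv_2io(sv));        /* valid in our case, per debugging */
    FILE   *fp  = PerlIO_findFILE(pio);     /* previously exported FILE*, if any */

    if (!fp)
        fp = PerlIO_exportFILE(pio, NULL);  /* NULL mode = keep the stream's mode */

    return fp;                              /* may still be NULL on failure */
}
```

I don't know whether the FILE* created by PerlIO_exportFILE would need a matching PerlIO_releaseFILE somewhere on cleanup, so take this only as an illustration of the idea.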