rt.cpan.org will be shut down on March 1st, 2021.

This queue is for tickets about the IO-Socket-IP CPAN distribution.

Report information
The Basics
Id: 103422
Status: open
Priority: 0/
Queue: IO-Socket-IP

Owner: Nobody in particular
Requestors: w.phillip.moore [...]
Cc: ether [...]

Bug Information
Severity: Important
Broken in: 0.37
Fixed in: (no value)

Subject: IO::Socket::IP and IO::Socket::SSL do not play well together
I'm using the latest IO::Socket::SSL 2.012 with perl 5.20.1, which includes IO::Socket::IP 0.29, and I've discovered that if IO::Socket::SSL::connect fails, the error is never displayed. I have a rather complex application which uses IO::Socket::SSL with a lot of autogenerated and automanaged SSL certificates, and it started to fail due to an issue with how one of the certificates was being created. However, it took me quite a while to figure this out, because failed calls to IO::Socket::SSL->new() were returning undef and setting neither $! nor $@, but instead the very non-standard $SSL_ERROR, which had this value:

    IO::Socket::IP configuration failed

In years of developing IO::Socket::SSL apps I had never seen this one before. I did some digging, and found that the problem is that IO::Socket::IP makes an assumption about IO::Socket::SSL::connect which is not true: namely, that if it fails, it will set $!, which I don't think has ever been true. The offending code is in IO::Socket::IP::setup:

    603    if( defined( my $addr = $info->{peeraddr} ) ) {
    604       if( $self->connect( $addr ) ) {
    605          $! = 0;
    606          return 1;
    607       }
    608
    609       if( $! == EINPROGRESS or HAVE_MSWIN32 && $! == Errno::EWOULDBLOCK() ) {
    610          ${*$self}{io_socket_ip_connect_in_progress} = 1;
    611          return 0;
    612       }
    613
    614       ${*$self}{io_socket_ip_errors}[0] = $!;
    615       next;
    616    }

When IO::Socket::SSL::connect fails, $! is an empty string, which is defined, so at the end of the subroutine:

    621    # Pick the most appropriate error, stringified
    622    $! = ( grep defined, @{ ${*$self}{io_socket_ip_errors} } )[0];
    623    $@ = "$!";
    624    return undef;

both $! and $@ end up being the empty string, and the real error is lost. In my case, the real error, found only by stepping through the code in the debugger, is:

    SSL connect attempt failed error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

The root cause of that problem is an issue in my test suite, so I own that.
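To make the trap concrete: a defined-but-empty $! survives `grep defined` and shadows any real error collected later. Here is a minimal, standalone sketch of that behavior, together with one possible fix (filtering on length instead of definedness); the fix is only my suggestion, not a proposed official patch:

```perl
use strict;
use warnings;

# Simulate the error list IO::Socket::IP collects: the SSL-layer
# connect left $! as "" (defined but empty); a later attempt
# recorded a genuine errno string.
my @errors = ( "", "Connection refused" );

# Current behaviour: the empty string is defined, so it is picked
# first and the real error is hidden.
my $picked = ( grep defined, @errors )[0];
print "picked: '$picked'\n";              # picked: ''

# Possible fix: require a non-empty error string.
my $better = ( grep { defined && length } @errors )[0];
print "better: '$better'\n";              # better: 'Connection refused'
```

With the length filter, a layered ->connect that leaves $! empty would no longer mask errors from other address attempts.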
However, it would really be nice if IO::Socket::SSL and IO::Socket::IP worked together correctly, especially since IO::Socket::SSL prefers IO::Socket::IP if it is available. IO::Socket::SSL also has no means of controlling which IO::Socket::* class gets used, so users of perl 5.20 end up being forced to use IO::Socket::IP. I'm going to post this in the IO::Socket::SSL queue as well, and hopefully I can get the two authors to collaborate on the best solution. Personally, I think the fact that IO::Socket::SSL sets a totally non-standard error variable is a serious mistake in that code, since it means that code like IO::Socket::IP has to handle it as a special case. I think it would be cleanest if IO::Socket::SSL::connect set $!, or $@, in addition to $SSL_ERROR whenever the latter is true. I haven't yet chased down why I'm seeing $SSL_ERROR but not $!, so I don't know if that is expected behavior or not. Now to figure out why my certs are broken.....
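Until the two modules agree on one mechanism, callers can at least surface the hidden error themselves by consulting all three variables. A defensive sketch (the host and port are placeholders, not from the original report):

```perl
use strict;
use warnings;
use IO::Socket::SSL;

# Placeholder peer; substitute a real host/port.
my $sock = IO::Socket::SSL->new(
   PeerHost => 'example.com',
   PeerPort => 443,
);

unless( $sock ) {
   # Check every place an error may have landed: $SSL_ERROR for the
   # SSL layer, $@ and $! for the IO::Socket layers beneath it.
   my $err = ( grep { defined && length }
               $IO::Socket::SSL::SSL_ERROR, $@, "$!" )[0]
             // "unknown failure";
   die "SSL connect failed: $err\n";
}
```

This doesn't fix the underlying mismatch, but it keeps an application from dying with an empty error message while the modules sort it out.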
FWIW, I took a quick look at the latest version (0.37), and that particular code is unchanged, so I have every reason to believe this is still an issue in the latest version.
At a glance it's hard to see how to solve this one. IO::Socket::IP itself is built atop IO::Socket, whose API is that errors are indicated by a side-effecting mutation of $!, with the erroring method then returning some sentinel failure value. That is the interface it's built for. When IO::Socket::SSL is layered on top, IO::Socket::IP's code ends up calling IO::Socket::SSL's ->connect method, not the underlying IO::Socket one. Since that ->connect has different error-signalling semantics, the errors aren't visible to IO::Socket::IP and hence to its caller. To find a way to solve this, we'd need to get IO::Socket (from GBARR's IO dist) and IO::Socket::SSL (from SULLR) to agree to use the same mechanism for error reporting. Then IO::Socket::IP would have one place to look, not two. -- Paul Evans

This service is sponsored and maintained by Best Practical Solutions and runs on infrastructure.
