
This queue is for tickets about the POE-Component-Client-HTTP CPAN distribution.

Report information
The Basics
Id: 41312
Status: resolved
Priority: 0/
Queue: POE-Component-Client-HTTP

Owner: Nobody in particular
Requestors: me+cpan [...]

Bug Information
Severity: (no value)
Broken in: 0.86
Fixed in: (no value)

Subject: HTTP Timeout and Keepalive timeout are confusing.
text/plain 198b
PATCHES FOR GREAT JUSTICE. a) code: makes it so Keepalive's timeout can never exceed Client::HTTP's timeout. b) doc: clarifies the difference between an HTTP timeout and the module's timeout. c) a test for the code.
Subject: patch.p
text/x-pascal 2.9k
:100644 100644 d4b43ae... 0000000... M	Component/Client/
diff --git a/Component/Client/ b/Component/Client/
index d4b43ae..abb32af 100644
--- a/Component/Client/
+++ b/Component/Client/
@@ -16,6 +16,7 @@ $VERSION = '0.85';
 use Carp qw(croak);
 use HTTP::Response;
 use Net::HTTP::Methods;
+use Scalar::Util qw();
 use POE::Component::Client::HTTP::RequestFactory;
 use POE::Component::Client::HTTP::Request qw(:states :fields);
@@ -111,6 +112,16 @@ sub spawn {
   my $bind_addr = delete $params{BindAddr};
   my $cm = delete $params{ConnectionManager};
+  ## Stops one sort of easily detectable confusion about timeouts
+  if (
+    Scalar::Util::blessed($cm) eq 'POE::Component::Client::Keepalive'
+    && defined $params{'Timeout'}
+    && $params{'Timeout'} < $cm->[POE::Component::Client::Keepalive::SF_TIMEOUT]
+  ) {
+    die "ERROR: Client::HTTP's 'Timeout' is shorter than Client::Keepalive's 'timeout';"
+      . " set a longer timeout in Client::HTTP, or a shorter one in Client::Keepalive"
+    ;
+  }
+
   my $request_factory = POE::Component::Client::HTTP::RequestFactory->new(
     \%params
   );
@@ -1201,10 +1212,33 @@ If redirects are followed, a response chain should be built, and can be
 accessed through $response_object->previous(). See HTTP::Response for
 details here.
 
-=item Timeout => $query_timeout
-
-C<Timeout> specifies the amount of time a HTTP request will wait for
-an answer. This defaults to 180 seconds (three minutes).
+=item Timeout => Client::HTTP's request timeout (not an HTTP timeout)
+
+C<Timeout> specifies the amount of time, in seconds, this module will
+wait before expiring a request. B<This *includes* but is not simply
+the HTTP timeout.> If you use L<POE::Component::Client::Keepalive>,
+you can for instance queue up 15,000 requests to
+L<POE::Component::Client::HTTP>, but those 15,000 requests may be
+constrained by limits specified in the keepalive pool. With
+sufficient requests, it is very possible for requests to expire in
+the client before transmission can even begin.
+
+Take for instance the following scenario: in the pool you specify 1
+for the max_open and max_per_host parameters, and you make 100 kernel
+posts to Client::HTTP. The Timeout timer begins when those 100 kernel
+posts are received by Client::HTTP, not when each HTTP request is
+sent. If they all arrive in Client::HTTP within 1 real second, then
+all 100 requests must finish within $Timeout + 1 real seconds.
+
+If a request expires because it violates the Client::HTTP timeout,
+the end user will see a pseudo-server-response generated with an HTTP
+response code of 500, containing:
+
+  connect error 110: Connection timed out
+
+The utility of a client timeout is fairly limited; chances are, if
+you need a timeout, you want the "timeout" parameter in
+L<POE::Component::Client::Keepalive>, which refers to the HTTP
+timeout. The timeout in the pool can never be longer than the
+timeout in Client::HTTP.
 
 =back
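The guard the patch adds can be sketched in plain Perl without the POE stack. This is a minimal sketch, not the real components: `My::FakeKeepalive` and its `timeout` accessor are stand-ins for the keepalive pool (the actual patch peeks at the object's internal SF_TIMEOUT slot instead), and the numbers are arbitrary.

```perl
#!/usr/bin/env perl
# Sketch of the validation the patch introduces: refuse to spawn when the
# connection pool's timeout exceeds the client's own Timeout.
use strict;
use warnings;
use Scalar::Util qw(blessed);

{
    # Stand-in for POE::Component::Client::Keepalive (hypothetical class).
    package My::FakeKeepalive;
    sub new     { my ($class, %args) = @_; bless {%args}, $class }
    sub timeout { $_[0]{timeout} }
}

# Dies when the client's Timeout is shorter than the pool's timeout,
# mirroring the check added to spawn().
sub check_timeouts {
    my ($cm, %params) = @_;
    if (   (blessed($cm) // '') eq 'My::FakeKeepalive'
        && defined $params{Timeout}
        && $params{Timeout} < $cm->timeout )
    {
        die "ERROR: client 'Timeout' is shorter than the pool's 'timeout'\n";
    }
    return 1;
}

my $pool = My::FakeKeepalive->new( timeout => 20_000 );

# A client Timeout of 10_000 is shorter than the pool's 20_000: rejected.
print eval { check_timeouts( $pool, Timeout => 10_000 ) }
    ? "ok\n"
    : "caught: $@";

# A client Timeout of 30_000 is longer, so the check passes.
check_timeouts( $pool, Timeout => 30_000 ) and print "30_000 passes\n";
```

The point of checking at spawn() time is that the misconfiguration is detected immediately, rather than surfacing later as confusing 500 pseudo-responses.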
text/x-perl 455b
#!/usr/bin/env perl
use strict;
use POE qw(Component::Client::HTTP Component::Client::Keepalive);
use Test::More tests => 1;

# Silence the expected die() output.
open STDERR, '>', '/dev/null';

eval {
  my $pool = POE::Component::Client::Keepalive->new(
    max_per_host => 4,
    max_open     => 16,
    timeout      => 20000,
    keep_alive   => 5,
  );
  POE::Component::Client::HTTP->spawn(
    Timeout           => 10000,
    ConnectionManager => $pool,
  );
};
like( $@, qr/shorter/, "Test of timeouts" );
Clarifications to Timeout applied in revision 355.

This service is sponsored and maintained by Best Practical Solutions.
