
This queue is for tickets about the SVN-Web CPAN distribution.

Report information
The Basics
Id: 82550
Status: stalled
Priority: 0
Queue: SVN-Web

People
Owner: Nobody in particular
Requestors: DRUOSO [...] cpan.org
Cc:
AdminCc:

Bug Information
Severity: Important
Broken in: 0.53
Fixed in: (no value)



Subject: Prevent timeouts in large diffs
SVN::Web currently tries to prevent problems by refusing to render diffs longer than 200k, but it still writes the diff to a temporary file and slurps it into memory before deciding that it shouldn't render it.

The attached patch forks before invoking the diff and sends the output to a pipe, which is closed if the output grows bigger than the limit. It also adds a 5-second alarm to time out if the diff calculation takes too long.

The current code can clog /tmp under many concurrent requests for branch-creation revisions of large repositories (since a branch creation shows all the files as added).
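The mechanism described above is a standard POSIX fork/pipe/alarm pattern. Below is a minimal, self-contained sketch of it, with invented names and a child that prints 500k of filler in place of the real $ctx->diff call: the parent reads at most max_diff_size bytes from the pipe, arms a 5-second alarm, and marks the diff as interrupted when either limit is hit.

```perl
#!/usr/bin/perl
# Hypothetical sketch of the patch's approach: fork a writer, read from a
# pipe with a size cap, and use alarm as a timeout. The child's filler
# output stands in for the real diff; no SVN::Web code is involved.
use strict;
use warnings;

my $max_diff_size = 200_000;        # the same 200k cap the ticket mentions

pipe(my $reader, my $writer) or die "pipe: $!";
my $pid = fork();
die "Failed to fork" unless defined $pid;

if ($pid == 0) {                    # child: produce an oversized "diff"
    close $reader;
    print {$writer} 'x' x 500_000;
    exit 0;
}

close $writer;                      # parent keeps only the read end

my ($out, $total, $interrupted) = ('', 0, 0);

$SIG{ALRM} = sub { kill 'INT', $pid; $interrupted = 1 };
alarm 5;                            # give the child at most 5 seconds

while (my $n = read $reader, my $buf, 65_536) {
    if ($total + $n >= $max_diff_size) {    # cap reached: stop reading
        $interrupted = 1;
        last;
    }
    $out   .= $buf;
    $total += $n;
}

alarm 0;
close $reader;                      # an oversized child now dies of SIGPIPE
waitpid $pid, 0;                    # reap it (note: pid first, then flags)

print $interrupted
    ? "diff too large or too slow; not rendered\n"
    : "diff is $total bytes\n";
```

Closing the read end early is what keeps memory bounded: a child that is still writing gets SIGPIPE on its next write and exits, so nothing near the full diff ever accumulates in memory or in /tmp.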
Subject: svnweb-largediff.patch
diff -uNard svnweb-0.53.orig/lib/perl5/SVN/Web/Diff.pm svnweb-0.53/lib/perl5/SVN/Web/Diff.pm
--- svnweb-0.53.orig/lib/perl5/SVN/Web/Diff.pm	2013-01-07 18:06:46.197629000 -0500
+++ svnweb-0.53/lib/perl5/SVN/Web/Diff.pm	2013-01-07 18:06:37.116919000 -0500
@@ -209,31 +209,65 @@
     $mime eq 'text/html' and $style = 'Text::Diff::HTML';
     $mime eq 'text/plain' and $style = 'Unified';
 
-    my($out_h, $out_fn) = File::Temp::tempfile();
-    my($err_h, $err_fn) = File::Temp::tempfile();
+    open my $err_h, '>', '/dev/null';
+    my ($diff_pipe_out, $diff_pipe_in);
+    pipe($diff_pipe_out, $diff_pipe_in);
+    my $diff_forked_out = fork();
+    if (! defined $diff_forked_out) {
+        die "Failed to fork";
+    } elsif ($diff_forked_out == 0) {
+        open STDOUT, '>', '/dev/null';
+        open STDERR, '>', '/dev/null';
+        close $diff_pipe_out;
+        eval {
+            $ctx->diff([], "$uri$path", $rev1, "$uri$path", $rev2,
+                       0, 1, 0, $diff_pipe_in, $err_h);
+        };
+        exit;
+    }
+    close $diff_pipe_in;
 
-    $ctx->diff([], "$uri$path", $rev1, "$uri$path", $rev2,
-               0, 1, 0, $out_h, $err_h);
+    alarm 5;
+    local $SIG{ALRM} = sub {
+        kill 2, $diff_forked_out;
+    };
+    my $max_diff_size = $self->{opts}{max_diff_size};
+    my $total_diff_read = 0;
+    my $interrupted_diff = 0;
+    my $buff;
     my $out_c;
-    local $/ = undef;
-    seek($out_h, 0, 0);
-    $out_c = <$out_h>;
-
-    unlink($out_fn);
-    unlink($err_fn);
-    close($out_h);
-    close($err_h);
+    while (1) {
+        use bytes;
+        my $bytes_read = read $diff_pipe_out, $buff, 200000; # read 200k (the max_diff_size)
+        if (defined $bytes_read && $bytes_read >= 0) {
+            if ($bytes_read == 0) {
+                last;
+            } elsif (($bytes_read + $total_diff_read) < $max_diff_size) {
+                $out_c .= $buff;
+                $total_diff_read += $bytes_read;
+            } else {
+                $interrupted_diff = 1;
+                last;
+            }
+        } else {
+            $interrupted_diff = 1;
+            last;
+        }
+    }
+    close $diff_pipe_out;
+    close $err_h;
+    waitpid $diff_forked_out, 0;
+    alarm 0;
 
-    my $diff_size = length($out_c);
-    my $max_diff_size = $self->{opts}{max_diff_size};
+    my $diff_size = $total_diff_read;
 
     if($mime eq 'text/html') {
         use SVN::Web::DiffParser;
         my $diff;
         my $diff_size = length($out_c);
         my $max_diff_size = $self->{opts}{max_diff_size} || 0;
-        if($diff_size <= $max_diff_size) {
+        if($diff_size <= $max_diff_size && !$interrupted_diff) {
             $diff = SVN::Web::DiffParser->new($out_c);
         }
diff -uNard svnweb-0.53.orig/lib/perl5/SVN/Web/Revision.pm svnweb-0.53/lib/perl5/SVN/Web/Revision.pm
--- svnweb-0.53.orig/lib/perl5/SVN/Web/Revision.pm	2013-01-07 18:06:46.199621000 -0500
+++ svnweb-0.53/lib/perl5/SVN/Web/Revision.pm	2013-01-07 18:06:37.121909000 -0500
@@ -218,24 +218,60 @@
     my $diff;
     my $diff_size = 0;
+
     if($self->{opts}{show_diff}) {
-        my($out_h, $out_fn) = File::Temp::tempfile();
-        my($err_h, $err_fn) = File::Temp::tempfile();
-
-        $ctx->diff([], $uri, $rev - 1, $uri, $rev, 1, 1, 0, $out_h, $err_h);
+        open my $err_h, '>', '/dev/null';
+        my ($diff_pipe_out, $diff_pipe_in);
+        pipe($diff_pipe_out, $diff_pipe_in);
+        my $diff_forked_out = fork();
+        if (! defined $diff_forked_out) {
+            die "Failed to fork";
+        } elsif ($diff_forked_out == 0) {
+            open STDOUT, '>', '/dev/null';
+            open STDERR, '>', '/dev/null';
+            close $diff_pipe_out;
+            eval {
+                $ctx->diff([], $uri, $rev - 1, $uri, $rev,
+                           1, 1, 0, $diff_pipe_in, $err_h);
+            };
+            exit;
+        }
+        close $diff_pipe_in;
 
-        my $out_c;
-        local $/ = undef;
-        seek($out_h, 0, 0);
-        $out_c = <$out_h>;
+        alarm 5;
+        local $SIG{ALRM} = sub {
+            kill 2, $diff_forked_out;
+        };
 
-        unlink($out_fn);
-        unlink($err_fn);
-        close($out_h);
-        close($err_h);
+        my $total_diff_read = 0;
+        my $interrupted_diff = 0;
+        my $buff;
+        my $out_c;
+        while (1) {
+            use bytes;
+            my $bytes_read = read $diff_pipe_out, $buff, 200000; # read 200k (the max_diff_size)
+            if (defined $bytes_read && $bytes_read >= 0) {
+                if ($bytes_read == 0) {
+                    last;
+                } elsif (($bytes_read + $total_diff_read) < $max_diff_size) {
+                    $out_c .= $buff;
+                    $total_diff_read += $bytes_read;
+                } else {
+                    $interrupted_diff = 1;
+                    last;
+                }
+            } else {
+                $interrupted_diff = 1;
+                last;
+            }
+        }
+        close $diff_pipe_out;
+        close $err_h;
+        waitpid $diff_forked_out, 0;
+        alarm 0;
 
-        $diff_size = length($out_c);
-        if($diff_size <= $max_diff_size) {
+        $diff_size = $total_diff_read;
+        if ($diff_size <= $max_diff_size) {
             $diff = SVN::Web::DiffParser->new($out_c);
         }
     }
This is a great patch, but it's against a historic version of SVN::Web. I would love to include it, but it's non-trivial to make it apply. Can I ask you to make the changes against 0.63 and submit again? Please fork and submit via github if you like: https://github.com/djzort/SVN-Web

On Mon Jan 07 18:15:26 2013, DRUOSO wrote:
> SVN::Web currently tries to prevent problems by prevent rendering diffs
> longer than 200k, but it still reads the diff to a temporary file and
> slurps it into memory before deciding that it shouldn't do that.
>
> The attached patch forks before invoking the diff, and have that sent to
> a pipe, which will be closed if the output is bigger than the limit. It
> also adds a alarm of 5 seconds to timeout if the diff calculation takes
> too long.
>
> The current code can clog tmp if the amount of concurrent requests in
> branch create operations of large repositories (since the branch
> creation shows all the files as add).
I doubt you're still interested in this. I will leave this as stalled in case anyone wants to submit a patch.


This service is sponsored and maintained by Best Practical Solutions and runs on Perl.org infrastructure.

Please report any issues with rt.cpan.org to rt-cpan-admin@bestpractical.com.