
This queue is for tickets about the Scalar-List-Utils CPAN distribution.

Report information
The Basics
Id: 132876
Status: open
Priority: 0/
Queue: Scalar-List-Utils

People
Owner: Nobody in particular
Requestors: HVDS [...] cpan.org
Cc:
AdminCc:

Bug Information
Severity: (no value)
Broken in: 1.55
Fixed in: (no value)



Subject: reduce failure (multicall?)
I want to reduce a list of arrayrefs of numbers to the longest common prefix. In my context, that seemed easiest to do destructively:

    use strict;
    use warnings;
    use List::Util qw{ reduce };
    use Data::Dumper;
    local $Data::Dumper::Indent = 0;

    my @all = ([2, 1], [2, 2], [2, 2, 1, 1, 3]);
    my $result = List::Util::reduce(sub {
        my @new;
        print \@new, " $a $b\n";
        while (@$a && @$b) {
            my $la = shift @$a;
            my $lb = shift @$b;
            last if $la != $lb;
            push @new, $la;
        }
        \@new;
    }, @all);
    print Dumper($result), "\n";

.. but whereas I'm expecting a result of [ 2 ], this outputs:

    ARRAY(0x561ed675e6f0) ARRAY(0x561ed65d35c0) ARRAY(0x561ed65ebf18)
    ARRAY(0x561ed675e6f0) ARRAY(0x561ed675e6f0) ARRAY(0x561ed65ebff0)
    $VAR1 = [];

The diagnostic shows that on the second entry to the callback, the $a input is the same reference as \@new, which is clearly the source of the problem. This sounds similar in some aspects to the "known bug" RT #95409 from the documentation.

I can work around it easily enough by returning C< [@new] > instead of C< \@new >, but it took a bunch of time to diagnose, and others may be even more confused than I was by this. Can it be fixed?

Hugo
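The C< [@new] > workaround mentioned in the report can be sketched in full. This is the same reduction with the diagnostic print removed; returning a fresh copy means the sub's pad array @new is never shared with the next iteration's $a:

```perl
use strict;
use warnings;
use List::Util qw( reduce );

my @all = ([2, 1], [2, 2], [2, 2, 1, 1, 3]);

# Destructive longest-common-prefix, as in the report, but returning a
# copy [@new] rather than \@new, so the reused pad entry is never aliased.
my $result = reduce {
    my @new;
    while (@$a && @$b) {
        my $la = shift @$a;
        my $lb = shift @$b;
        last if $la != $lb;
        push @new, $la;
    }
    [@new];   # fresh copy: breaks the link to the pad array @new
} @all;

# $result is now [2], the expected common prefix.
```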
On Thu Jun 25 12:02:54 2020, HVDS wrote:
> Can it be fixed?
Yes and no. It largely depends what you mean by "fixed".

The various List::Util list-iteration functions all use a little-known core perl feature called MULTICALL, which is a performance enhancement for functions that just invoke a supplied sub body repeatedly, as reduce, any/all, etc. all do. MULTICALL basically arranges to do the ENTERSUB parts of the call, but stops before the optree is actually executed, returning control to the XS function (e.g. reduce). That can then run the actual body repeatedly, before finally doing the LEAVESUB step just once at the end, after it has finished. Overall this performance optimisation aims to make the whole operation run faster, by not having to repeatedly ENTERSUB/LEAVESUB for every element.

The downside of doing it that way is, as you observe, that the other side-effects of ENTERSUB/LEAVESUB don't get a chance to clean up in between, so things like pad reset don't happen.

There are basically three ways around this:

1) Just stop using MULTICALL at all.

This would have quite a performance impact on any existing code currently using these functions. The entire reason MULTICALL was originally invented was to avoid this.

2) Recreate the pad-clearing steps of ENTERSUB/LEAVESUB in the XS functions such as reduce, any/all, ...

We'd want to study the performance impact of even doing this. Obviously, the more these functions have to recreate of what ENTERSUB/LEAVESUB do, the more code duplication there is and the worse the performance gets. Taken to the extreme, we'd end up copying all of the logic and thus be no better than option 1 in terms of performance, but at a significant cost in code complexity.

0) Leave it as it is, and document "don't do this".

Currently we have taken option 0.
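To see what MULTICALL is saving, one can compare List::Util::reduce against a plain-Perl equivalent that pays a full sub call (ENTERSUB/LEAVESUB) for every element. This is a sketch for illustration; slow_reduce is a hypothetical helper, not part of List::Util, and it assumes it is called from package main so that $a/$b resolve to the same package variables the block uses:

```perl
use strict;
use warnings;
use List::Util qw( reduce );

# A plain-Perl reduce that enters and leaves the sub once per element --
# exactly the per-element overhead MULTICALL is designed to avoid.
sub slow_reduce (&@) {
    my $code = shift;
    my $acc  = shift;
    for my $x (@_) {
        local ($a, $b) = ($acc, $x);   # $a/$b are exempt from "strict vars"
        $acc = $code->();              # full ENTERSUB/LEAVESUB each time
    }
    return $acc;
}

my $fast = reduce      { $a + $b } 1 .. 1000;
my $slow = slow_reduce { $a + $b } 1 .. 1000;
# Both compute 500500; benchmarking the two (e.g. with the Benchmark
# module) would show the cost the XS/MULTICALL version avoids.
```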
If you think we should take option 1 or 2 instead, I'd first like to see some benchmarking of the relative performance overheads those would entail; especially as they are likely to be dominant factors in typical small cases such as

    any { $_ eq "something" } @items

There is, on the horizon, a third option. I have plans to create a better version of these list utilities which would operate on a lower level, being parsed as real keywords and operating at the same level as e.g. map and grep, so the BLOCK is not a separate function and no ENTERSUB/LEAVESUB logic needs to take place. It wouldn't be possible to simply provide these in place of the regular ones, because code such as

    reduce { return $a+$b } @numbers

would suddenly behave very differently. Instead, they'd live in a new XS module under a new name, with the hope of eventually becoming real core syntax.

-- Paul Evans
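The reason a keyword version could not be a drop-in replacement is worth spelling out. Today the BLOCK passed to reduce is a real anonymous sub, so `return` merely exits that sub and supplies the step's partial result; if the block were parsed as core syntax like map/grep, the block would no longer be a sub, and the same `return` would return from the enclosing subroutine instead. A small sketch of the current behaviour:

```perl
use strict;
use warnings;
use List::Util qw( reduce );

# The BLOCK is an anonymous sub, so "return" just returns from that sub,
# i.e. it yields the partial result for this reduction step.
my $sum = reduce { return $a + $b } 1 .. 5;
# $sum == 15, the same as writing { $a + $b }

# Under a real keyword parsed like map/grep, that "return" would instead
# return from the *enclosing* subroutine -- a silent change in behaviour.
```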
On Mon Jul 06 09:29:57 2020, PEVANS wrote:
> On Thu Jun 25 12:02:54 2020, HVDS wrote:
> > Can it be fixed?
> > Yes and no. It largely depends what you mean "fixed".
My preference would be for the default implementation to prefer correctness over speed. Maybe it would be possible to provide dual implementations, such that you get the non-multicall variant unless you specifically ask for the faster but more restrictive version. I confess I have no idea how much work that would involve, but making the user ask for something dangerous before handing it out seems like a sensible approach.

I don't think (2) makes sense as it stands: if there is a useful halfway house that keeps some multicall benefits while offering more correctness, I think core would be the only sane place for it.

Option (3) sounds promising. Without that, I think it's a big problem that you cannot warn at compile or run time that the block will not behave correctly; it also sounds like it would be very hard to comprehensively document the restrictions such that a user (probably in retrospect) has a chance of understanding why a particular block won't work, and how to fix it.

Hugo


This service is sponsored and maintained by Best Practical Solutions and runs on Perl.org infrastructure.

Please report any issues with rt.cpan.org to rt-cpan-admin@bestpractical.com.